javaniceday.com

  • How to create files in Salesforce for testing purposes

    January 29th, 2023

The first question I asked myself when starting to write unit tests that involve files was: how do I upload files from my tests? Fortunately, dealing with files in Salesforce in a unit test context is pretty easy.

Creating unit tests in Salesforce is a great way to ensure accurate data and maintain the integrity of your Salesforce applications. In this post, we’ll take a look at how to quickly and easily create text files in Salesforce to use in your unit tests.

We are going to work with the new Salesforce Files model and not with the old approach called “Notes and Attachments”.
If you still have Notes and Attachments and you want to convert them to Salesforce Files, I recommend installing
Magic Mover for Notes And Attachments to Lightning Experience

    Straight to the point

Instantiate a ContentVersion with a name, a description, and some small content. In this case I will create a TXT file to keep it simple.

ContentVersion cv = new ContentVersion();
cv.Description = 'test description';
cv.PathOnClient = 'test_file.txt';
cv.Title = 'test file ' + DateTime.now();
cv.VersionData = Blob.valueOf('test file body');
insert cv;

After creating the file, we want to relate it to one or more existing records such as an Account, an Opportunity, or even a custom object record.
To do that we have to insert a ContentDocumentLink:

ContentDocumentLink cdl = new ContentDocumentLink();
cdl.ContentDocumentId = [SELECT ContentDocumentId FROM ContentVersion WHERE Id = :cv.Id].ContentDocumentId;
cdl.LinkedEntityId = 'ANY ID'; // <----- put your record id here, for example an Account Id
cdl.ShareType = 'V';
insert cdl;

Now go to the associated record and you will see the attached file. You will have to add the Files related list to the layout in case you don’t have it yet.
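Putting it all together in an actual unit test, here is a minimal sketch. The class and method names are hypothetical, and it assumes you attach the file to a freshly created Account:

@isTest
private class FileCreationTest {

    @isTest
    static void createsAndLinksAFile() {
        // the record we will attach the file to
        Account acc = new Account(Name = 'Test Account');
        insert acc;

        // create the file
        ContentVersion cv = new ContentVersion();
        cv.Description = 'test description';
        cv.PathOnClient = 'test_file.txt';
        cv.Title = 'test file ' + DateTime.now();
        cv.VersionData = Blob.valueOf('test file body');
        insert cv;

        // link it to the account
        ContentDocumentLink cdl = new ContentDocumentLink();
        cdl.ContentDocumentId = [SELECT ContentDocumentId FROM ContentVersion WHERE Id = :cv.Id].ContentDocumentId;
        cdl.LinkedEntityId = acc.Id;
        cdl.ShareType = 'V';
        insert cdl;

        // verify the file ended up linked to the account
        System.assertEquals(1, [SELECT COUNT() FROM ContentDocumentLink WHERE LinkedEntityId = :acc.Id]);
    }
}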

    Photo by Christin Hume on Unsplash

  • How to build a sitemap xml in Node js using Express

    January 28th, 2023

Creating a sitemap.xml file in Node.js using Express is a fairly straightforward process. In this post, I will go over the necessary steps to generate and set up a sitemap.xml file for your Node.js web application.
To get started, you’ll need to install the necessary dependencies. This includes one of the most used frameworks in Node.js: Express.

A good sitemap.xml will help you a lot in terms of SEO. It’s a nice starting point if you want to index your site, and it’s just a standard XML file that search engines understand.

It can be as simple as this one:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
       <url>
      <loc>https://www.javaniceday.com/</loc>
          <lastmod>2019-11-18</lastmod>
          <changefreq>monthly</changefreq>
          <priority>0.8</priority>
       </url>
    </urlset>

You can see the complete list of tag definitions here: https://www.sitemaps.org/protocol.html

Let’s code a Node.js web app using the Express.js framework to expose an endpoint that prints a well-formed sitemap XML.

    Install Express framework

Install the Express generator globally to take advantage of its project scaffolding (the express command-line tool ships in the express-generator package):

sudo npm install -g express-generator

    Run this command to generate a default Express structure

    express --no-view --git node-sitemap-xml

You should see something like this. It could differ depending on your Express generator version.

    create : node-sitemap-xml/
    create : node-sitemap-xml/public/
    create : node-sitemap-xml/public/javascripts/
    create : node-sitemap-xml/public/images/
    create : node-sitemap-xml/public/stylesheets/
    create : node-sitemap-xml/public/stylesheets/style.css
    create : node-sitemap-xml/routes/
    create : node-sitemap-xml/routes/index.js
    create : node-sitemap-xml/routes/users.js
    create : node-sitemap-xml/public/index.html
    create : node-sitemap-xml/.gitignore
    create : node-sitemap-xml/app.js
    create : node-sitemap-xml/package.json
    create : node-sitemap-xml/bin/
    create : node-sitemap-xml/bin/www

Change directory:

    cd node-sitemap-xml 

    Install dependencies

    npm install 

    Run the web app

    DEBUG=node-sitemap-xml:* npm start   


    Let’s add a dependency to convert JavaScript objects to XML

    npm install js2xmlparser --save

    Add moment.js

Add Moment.js as a dependency to deal with dates. Moment.js is an awesome module to parse, format, and manipulate dates. Strongly recommended.

    npm install moment --save

Create a new file called sitemap.js in the routes folder and paste this content.

    sitemap.xml generation route

          
    const express = require("express");
    const router = express.Router();
     
    const js2xmlparser = require("js2xmlparser");
    const moment = require("moment");
     
    /**
 * It generates a standard sitemap.xml for SEO purposes
     */
    router.get("/", function(req, res, next) {
        try {
            //our records to index
            const records = getRecordsFromDataSource();
            const collection = [];
            let today = moment();
            today = today.format("YYYY-MM-DD");
            //add site root url
            const rootUrl = {};
            rootUrl.loc = "https://www.javaniceday.com/";
            rootUrl.lastmod = today;
            rootUrl.changefreq = "daily";
            rootUrl.priority = "1.0";
            rootUrl["image:image"] = {
                "image:loc": "https://www.javaniceday.com/default-image.jpg",
                "image:caption":
                    "javaniceday.com. Software development blog. Java, Node JS, Salesforce among other technologies",
            };
            collection.push(rootUrl);
     
            //add our record urls
            for (let i = 0; i < records.length; i++) {
                const url = {};
                url.loc = records[i].url;
                url.lastmod = records[i].updated_at;
                url["image:image"] = {
                    "image:loc": records[i].featured_image_url,
                    "image:caption": records[i].description,
                };
     
                collection.push(url);
            }
            const col = {
                "@": {
                    xmlns: "http://www.sitemaps.org/schemas/sitemap/0.9",
                    "xmlns:image": "http://www.google.com/schemas/sitemap-image/1.1",
                },
                url: collection,
            };
            const xml = js2xmlparser.parse("urlset", col);
            res.set("Content-Type", "text/xml");
            res.status(200);
            res.send(xml);
        } catch (e) {
            next(e);
        }
    });
     
    /**
     * @return a collection to index (typically we'll get these records from our database)
     */
    function getRecordsFromDataSource() {
        //these records will have our own structure, we return as they are and later we convert them to the xml standard format
        //so let's just define two records hard-coded
     
        const record1 = {
            url: "https://www.javaniceday.com/2019/07/11/better-queue-in-node-js/",
            description:
                "Introduction A good practice in software development is to delegate as much heavy work as possible to background jobs",
        featured_image_url: "https://www.javaniceday.com/example1.jpg",
            updated_at: "2019-07-11",
        };
        const record2 = {
            url: "https://www.javaniceday.com/2019/08/11/http-auth-basic-in-node-js-and-express/",
            description: "A small site in Node.js using Express that will have one protected page Http auth basic prompt",
        featured_image_url: "https://www.javaniceday.com/example1.jpg",
            updated_at: "2019-07-11",
        };
        return [record1, record2];
    }
     
    module.exports = router;

Basically, that code builds our sitemap XML dynamically.

• First, it creates a block for the home page and then iterates over our records, which can be a mix of different types such as recipes, news, cars, etc.
• You will have to modify it as necessary.
• Keep pagination in mind if you have a big collection of records (see the sitemap index sketch below).
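For that pagination case, the sitemaps.org protocol defines a sitemap index: a file that points to several smaller sitemaps. A minimal sketch (the sitemap-1.xml URL is hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <sitemap>
      <loc>https://www.javaniceday.com/sitemap-1.xml</loc>
      <lastmod>2023-01-28</lastmod>
   </sitemap>
</sitemapindex>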

As a last step, modify the file app.js to add our route:

const sitemapRouter = require("./routes/sitemap");
// (...)
app.use("/sitemap.xml", sitemapRouter);

    Run locally

    npm start

    Open your browser at http://localhost:3000/sitemap.xml and you will see this output:

          
    <?xml version='1.0'?>
    <urlset xmlns='http://www.sitemaps.org/schemas/sitemap/0.9' xmlns:image='http://www.google.com/schemas/sitemap-image/1.1'>
        <url>
            <loc>https://www.javaniceday.com/</loc>
            <lastmod>2019-11-18</lastmod>
            <changefreq>daily</changefreq>
            <priority>1.0</priority>
            <image:image>
                <image:loc>https://www.javaniceday.com/default-image.jpg</image:loc>
                <image:caption>javaniceday.com. Software development blog. Java, Node JS, Salesforce among other technologies</image:caption>
            </image:image>
        </url>
        <url>
            <loc>https://www.javaniceday.com/2019/07/11/better-queue-in-node-js/</loc>
            <lastmod>2019-07-11</lastmod>
            <image:image>
                <image:loc>https://www.javaniceday.com/example1.jpg</image:loc>
                <image:caption>Introduction A good practice in software development is to delegate as much heavy work as possible to background jobs</image:caption>
            </image:image>
        </url>
        <url>
            <loc>https://www.javaniceday.com/2019/08/11/http-auth-basic-in-node-js-and-express/</loc>
            <lastmod>2019-07-11</lastmod>
            <image:image>
                <image:loc>https://www.javaniceday.com/example1.jpg</image:loc>
                <image:caption>A small site in Node.js using Express that will have one protected page Http auth basic prompt</image:caption>
            </image:image>
        </url>
    </urlset>
          
        

After deploying the code to production you’ll want to give visibility to your sitemap.xml. I recommend you submit your URL to the most used (nowadays) search engine: Google.

Go to https://search.google.com/search-console and submit your URL. Google will inspect it periodically.

    See the full source code here: https://github.com/andrescanavesi/node-js-sitemap


  • Upload large files to S3 using Node.js

    January 28th, 2023

S3 is a powerful object storage service offered by Amazon Web Services (AWS). It is popularly used for storing data, and it can also be an effective way to store large files. Node.js makes it possible to upload large files to S3 with ease, and in this post we’ll take a look at how you can do it.


    Uploading large files to Amazon S3 is a common use case that many developers need to solve. If you’re using Node.js, the AWS SDK offers an easy way to upload files to S3 using streams. Streams enable you to read and write large amounts of data without having to store it all in memory.

Sometimes you need to upload a big file, let’s say larger than 100 MB. Streaming from disk is the right approach to avoid loading the entire file into memory.

To get started, you’ll first need to install the AWS SDK and configure credentials. Once that’s done, you can use the createReadStream function to read the file as a stream and the S3 multipart upload API: createMultipartUpload, uploadPart and completeMultipartUpload.

The AWS API lets us upload a big file in parts (chunks).

    The main steps are:

    • Let the API know that we are going to upload a file in chunks
    • Stream the file from disk and upload each chunk
    • Let the API know all the chunks were uploaded
const fs = require('fs');
const AWS = require('aws-sdk');

/**
         *
         * @param {string} fileName the name in S3
         * @param {string} filePath the absolute path to our local file
         * @return the final file name in S3
         */
        async function uploadToS3(fileName, filePath) {
            if (!fileName) {
                throw new Error('the fileName is empty');
            }
            if (!filePath) {
                throw new Error('the file absolute path is empty');
            }
           
        const fileNameInS3 = `some/sub/folder/${fileName}`; // the relative path (key) inside the bucket, without a leading slash
            console.info(`file name: ${fileNameInS3} file path: ${filePath}`);
    
            if (!fs.existsSync(filePath)) {
                throw new Error(`file does not exist: ${filePath}`);
            }
    
            const bucket = 'my-bucket';
            const s3 = new AWS.S3();
            const statsFile = fs.statSync(filePath);
            console.info(`file size: ${Math.round(statsFile.size / 1024 / 1024)}MB`);
    
            //  Each part must be at least 5 MB in size, except the last part.
            let uploadId;
            try {
                const params = {
                    Bucket: bucket,
                    Key: fileNameInS3,
                };
                const result = await s3.createMultipartUpload(params).promise();
                uploadId = result.UploadId;
            console.info(`file ${fileNameInS3} multipart created with upload id: ${uploadId}`);
            } catch (e) {
                throw new Error(`Error creating S3 multipart. ${e.message}`);
            }
    
            const chunkSize = 10 * 1024 * 1024; // 10MB
            const readStream = fs.createReadStream(filePath); // you can use a second parameter here with this option to read with a bigger chunk size than 64 KB: { highWaterMark: chunkSize }
    
            // read the file to upload using streams and upload part by part to S3
            const uploadPartsPromise = new Promise((resolve, reject) => {
                const multipartMap = { Parts: [] };
    
                let partNumber = 1;
                let chunkAccumulator = null;
    
                readStream.on('error', (err) => {
                    reject(err);
                });
    
                readStream.on('data', (chunk) => {
                    // it reads in chunks of 64KB. We accumulate them up to 10MB and then we send to S3
                    if (chunkAccumulator === null) {
                        chunkAccumulator = chunk;
                    } else {
                        chunkAccumulator = Buffer.concat([chunkAccumulator, chunk]);
                    }
                    if (chunkAccumulator.length > chunkSize) {
                        // pause the stream to upload this chunk to S3
                        readStream.pause();
    
                        const chunkMB = chunkAccumulator.length / 1024 / 1024;
                    
                        const params = {
                            Bucket: bucket,
                            Key: fileNameInS3,
                            PartNumber: partNumber,
                            UploadId: uploadId,
                            Body: chunkAccumulator,
                            ContentLength: chunkAccumulator.length,
                        };
                        s3.uploadPart(params).promise()
                            .then((result) => {
                                console.info(`Data uploaded. Entity tag: ${result.ETag} Part: ${params.PartNumber} Size: ${chunkMB}`);
                                multipartMap.Parts.push({ ETag: result.ETag, PartNumber: params.PartNumber });
                                partNumber++;
                                chunkAccumulator = null;
                                // resume to read the next chunk
                                readStream.resume();
                            }).catch((err) => {
                                console.error(`error uploading the chunk to S3 ${err.message}`);
                                reject(err);
                            });
                    }
                });
    
                readStream.on('end', () => {
                    console.info('End of the stream');
                });
    
                readStream.on('close', () => {
                    console.info('Close stream');
                    if (chunkAccumulator) {
                        const chunkMB = chunkAccumulator.length / 1024 / 1024;
    
                        // upload the last chunk
                        const params = {
                            Bucket: bucket,
                            Key: fileNameInS3,
                            PartNumber: partNumber,
                            UploadId: uploadId,
                            Body: chunkAccumulator,
                            ContentLength: chunkAccumulator.length,
                        };
    
                        s3.uploadPart(params).promise()
                            .then((result) => {
                                console.info(`Last Data uploaded. Entity tag: ${result.ETag} Part: ${params.PartNumber} Size: ${chunkMB}`);
                                multipartMap.Parts.push({ ETag: result.ETag, PartNumber: params.PartNumber });
                                chunkAccumulator = null;
                                resolve(multipartMap);
                            }).catch((err) => {
                                console.error(`error uploading the last chunk to S3 ${err.message}`);
                                reject(err);
                            });
                    } else {
                        // nothing left to upload: the stream ended exactly on a chunk boundary
                        resolve(multipartMap);
                    }
                });
            });
    
            const multipartMap = await uploadPartsPromise;
        console.info(`All parts have been uploaded. Let's complete the multipart upload. Parts: ${multipartMap.Parts.length}`);
    
            // gather all parts' tags and complete the upload
            try {
                const params = {
                    Bucket: bucket,
                    Key: fileNameInS3,
                    MultipartUpload: multipartMap,
                    UploadId: uploadId,
                };
                const result = await s3.completeMultipartUpload(params).promise();
                console.info(`Upload multipart completed. Location: ${result.Location} Entity tag: ${result.ETag}`);
            } catch (e) {
                throw new Error(`Error completing S3 multipart. ${e.message}`);
            }
    
            return fileNameInS3;
        }
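Here is a hedged usage sketch of the function above. The file name and path are hypothetical, and it assumes AWS credentials and region are already configured in the environment:

(async () => {
    try {
        // upload a local file; both arguments are example values
        const key = await uploadToS3('big-file.csv', '/tmp/big-file.csv');
        console.info(`multipart upload finished, S3 key: ${key}`);
    } catch (e) {
        console.error(`upload failed: ${e.message}`);
    }
})();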
          

  • SFDX Error authenticating with auth code due to: grant type not supported

    January 28th, 2023

    SFDX is the Salesforce command-line interface (CLI) used to deploy and manage your Salesforce applications. Unfortunately, an error can occur when authenticating with an auth code due to the grant type not being supported.

In Salesforce DX, you may experience a “grant type not supported” error when attempting to authenticate with an authorization code. This error usually occurs when you’re trying to authenticate with the wrong instance URL.


For example, let’s say you are trying to log in to a sandbox:

    sfdx auth:web:login -a myusername@myorg.sandbox --instanceurl=https://mysandbox-domain.lightning.force.com

    And you get this error in console:

    ERROR running auth:web:login: Invalid client credentials. Verify the OAuth client secret and ID. Error authenticating with auth code due to: grant type not supported

In your browser you get this:

Error authenticating with auth code due to: grant type not supported.

This is most likely not an error with the Salesforce CLI. Please ensure all information is accurate and try again.

Just in case, please check the following before going to the solution:

    • Check that your computer’s clock is accurate: Salesforce uses the OAuth protocol for authentication, which relies on accurate timekeeping. If your computer’s clock is off by more than a few minutes, you may encounter authentication errors.
    • Check your network connection: If your network connection is unstable or slow, you may encounter authentication errors. Make sure you have a stable internet connection.
    • Check your org’s settings: Make sure that your org is configured to allow API access and that your user account has the necessary permissions to access the API.
    • Try logging out and logging back in: Sometimes logging out of SFDX and logging back in can help resolve authentication issues.

    The solution

The problem is that you are using the lightning.force.com domain instead of my.salesforce.com:

    sfdx auth:web:login -a myusername@myorg.sandbox --instanceurl=https://mysandbox-domain.my.salesforce.com
    

    If none of these steps resolve the issue, you may need to reach out to Salesforce support for further assistance.


  • How to connect to a Redshift database from Node.js

    January 28th, 2023

Redshift is a popular data warehousing product developed by Amazon Web Services (AWS). It is based on the PostgreSQL database engine and is designed for handling large-scale analytical workloads. Redshift allows you to store, manage, and analyze structured data efficiently, making it ideal for data warehousing and business intelligence applications.

    As a columnar database, Redshift organizes data by column rather than by row. This storage structure enables faster query performance and better compression rates, especially when dealing with large volumes of data. Redshift also offers features such as parallel query execution, automatic data compression, and scalability to handle high-concurrency workloads.

    With its simplicity, scalability, and integration with other AWS services, Redshift has become a popular choice for organizations that need to process and analyze large amounts of data in real-time, allowing them to make data-driven decisions and gain insights into their business operations.

Connecting to a Redshift database from Node.js is possible, but it involves a few steps. With Node.js and Redshift, you can store, manage, and analyze your data quickly and easily.

    In this post, I’ll show you how to connect to a Redshift database from Node.js.

Since Redshift is a Postgres-based database, we can take advantage of the pg-promise module.

    Code snippet to connect to Redshift from Node.js

    import pgp from "pg-promise";
    
const connections = {}; // cache opened connections by database name
    
    export default class Redshift {
      static async getConnection() {
        const dbName = "myDb";
    
        if (!connections[dbName]) {
          const dbUser = "dbUser";
          const dbPassword = "dbPassword";
          const dbHost = "myHost";
          const dbPort = "dbPort";
    
          const dbc = pgp({ capSQL: true });
          console.log(`Opening connection to: ${dbName}, host is: ${dbHost}`);
    
          const connectionString = `postgres://${dbUser}:${dbPassword}@${dbHost}:${dbPort}/${dbName}`;
          connections[dbName] = dbc(connectionString);
        }
    
        return connections[dbName];
      }
    
      static async executeQuery(query) {
        try {
          const date1 = new Date().getTime();
          const connection = await this.getConnection();
          const result = await connection.query(query);
    
          const date2 = new Date().getTime();
          const durationMs = date2 - date1;
          const durationSeconds = Math.round(durationMs / 1000);
          let dataLength = 0;
    
          if (result && result.length) dataLength = result.length;
    
          console.log(
            `[Redshift] [${durationMs}ms] [${durationSeconds}s] [${dataLength.toLocaleString()} records] ${query}`
          );
    
          return result;
        } catch (e) {
          console.error(`Error executing query: ${query} Error: ${e.message}`);
          throw e;
        }
      }
    }
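Usage is then a one-liner. A minimal sketch, assuming the class lives in a hypothetical Redshift.js module and that the connection constants above hold real values:

import Redshift from "./Redshift.js"; // hypothetical module path

(async () => {
    // my_table is a hypothetical table name
    const rows = await Redshift.executeQuery("SELECT COUNT(*) AS total FROM my_table");
    console.log(rows[0].total);
})();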

    Resources

    • NPM module pg-promise
    • pg-promise self signed certificate error in Postgres

  • ERROR running force:source:retrieve: versions.map is not a function

    September 5th, 2022

Just delete the .sfdx folder and re-authenticate.

  • How to get Salesforce limits through SFDX CLI

    August 4th, 2022

          
    sfdx force:limits:api:display
          
        

Salesforce docs reference

  • Salesforce Apex method to get the country name by ISO Code

    July 23rd, 2022

    When you’re developing applications in Salesforce, you may need to retrieve the name of a country based on its ISO code. If this is the case, let’s implement an Apex method that you can use to simplify the process.

The method is called getCountryNameByIsoCode and it takes a string argument that represents the ISO code for a particular country. For example, if you want to retrieve the name of the United States, you would use the ISO code US.

The country ISO code for this method follows the ISO 3166-1 alpha-2 standard. Example: UY for Uruguay, AR for Argentina, etc.

    public static String getCountryNameByIsoCode(String isoCode){
        if(isoCode == null) return null;
        Schema.DescribeFieldResult fieldResult = User.Countrycode.getDescribe();
        List<Schema.PicklistEntry> pickListValues = fieldResult.getPicklistValues();
        
        for( Schema.PicklistEntry pickListEntry : pickListValues) {
            // pickListEntry.getLabel() returns the country name
            // pickListEntry.getValue() returns the country code
            if(pickListEntry.getValue().toLowerCase() == isoCode.toLowerCase()) {
                return pickListEntry.getLabel();
            }
        }
        return null;
    }
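A quick usage example. Note that the method relies on the User.CountryCode picklist, so it assumes the State and Country/Territory picklists are enabled in your org:

String country = getCountryNameByIsoCode('UY');
System.debug(country); // Uruguay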

  • Mock node-fetch in node.js with different scenarios

July 11th, 2022

Let’s say we have two functions that use node-fetch: one that GETs a JSON resource and one that downloads a file to disk. Here is how to mock node-fetch in Jest under different scenarios. First, the JSON GET (assume node-fetch is required at the top of the module):
    const getWithNodeFetch = async () => {
        const ops = {
            method: 'GET',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': 'OAuth [some access token]'
            }
        };
        const res = await fetch('https://example.com', ops);
        if(!res.ok) throw new Error(`error doing GET status code: ${res.status} status message ${res.statusText}`);
        return res.json();
}

And a function that downloads a file to a temp folder using streams (it also relies on the fs and os core modules):
    const getFileWithNodeFetch = async () => {
        const ops = {
            method: 'GET',
            headers: {
                'Content-Type': 'application/octet-stream',
                'Authorization': 'OAuth [some access token]'
            }
        };
    
        const fileFullPath = `${os.tmpdir()}/file.txt`;
        const res = await fetch('https://example.com/file.txt', ops);
        const fileStream = fs.createWriteStream(fileFullPath);
    
        if(!res.ok) throw new Error(`error downloading file. status code: ${res.status} status message ${res.statusText}`);
    
        await new Promise((resolve, reject) => {
            res.body.pipe(fileStream);
            res.body.on("error", reject);
            fileStream.on("finish", resolve);
        });
        return  {filePath: fileFullPath};
    
}

In our Jest test file, require node-fetch and mock the whole module:
    const fetch = require("node-fetch");
jest.mock('node-fetch');

Now we can implement different scenarios. First, a mock that returns a successful JSON response for JSON requests and a 500 error for anything else:
    fetch.mockReset();
    fetch.mockImplementation((url, options) => {
      if (options.headers["Content-Type"] == "application/json") {
        return Promise.resolve({
          ok: true,
          status: 200,
          statusText: "ok",
          json: () => {
            return {
              name: "test",
              fileExtension: "txt",
            };
          },
        });
      } else {
        return Promise.resolve({
          ok: false,
          status: 500,
          statusText: "some error",
        });
      }
});

A scenario where the server always fails with a 500:
    fetch.mockReset();
    fetch.mockReturnValue({
        status: 500,
        statusText: "error",
        json: () => ({})
});

A 404 scenario:
    fetch.mockReset();
    fetch.mockReturnValue({
        status: 404,
        statusText: "File not found",
        json: () => ({})
});

And a success scenario for the file download, where the mocked body exposes the pipe and on methods used by the streaming code:
    fetch.mockReset();
    fetch.mockReturnValue({
        ok: true,
        status: 200,
        statusText: "ok",
        body: {
            pipe: () => ({}),
            on: () => ({})
        },
        json: () => {
            return {
                name: "test",
                fileExtension: "txt"
            }
        }
    });
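With those mocks in place, a test looks like this. A minimal sketch, assuming the two functions above are exported from a hypothetical ./api module:

const { getWithNodeFetch } = require('./api'); // hypothetical module path

test('getWithNodeFetch returns the parsed JSON on a 200 response', async () => {
    fetch.mockReset();
    fetch.mockResolvedValue({
        ok: true,
        status: 200,
        statusText: 'ok',
        json: () => ({ name: 'test', fileExtension: 'txt' }),
    });

    const data = await getWithNodeFetch();
    expect(data.name).toBe('test');
});

test('getWithNodeFetch throws on a 500 response', async () => {
    fetch.mockReset();
    fetch.mockResolvedValue({ ok: false, status: 500, statusText: 'some error' });

    await expect(getWithNodeFetch()).rejects.toThrow('error doing GET');
});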
        
          
        
    Photo by Francesco Gallarotti on Unsplash

  • Get Salesforce ContentVersion file info using node-fetch

    July 7th, 2022

    Salesforce ContentVersion is a standard Salesforce object that represents a specific version of a file or document in Salesforce. It is used to store and manage various types of content, such as attachments, files, or document versions. Each ContentVersion record contains information about the file, including its title, file name, file extension, and other metadata. ContentVersion records can be associated with specific records in Salesforce, such as accounts, opportunities, or custom objects, allowing users to easily access and manage the related files.

    Node-fetch is a JavaScript library that provides an easy and convenient way to make HTTP requests in a Node.js environment. It allows you to send HTTP requests to remote servers and receive responses, making it useful for tasks such as fetching data from APIs or downloading files. Node-fetch simplifies the process of making HTTP requests by providing a simple and intuitive API.


    Let’s say you want to get the file info without downloading it. For example, you want to know the file name and extension. The way to do it is to make a request to:

https://myinstance-dev-ed.my.salesforce.com/services/data/v52.0/sobjects/ContentVersion/0688J000000Di2xTBA

First, let’s have a method to get the access token (using the jsforce module):
const jsForce = require("jsforce");

const getJsForceConnection = async () => {
        const username = "******";
        const password = "******";
        const securityToken = "******";
    
        const salesforceUrl = "https://myinstance-dev-ed.my.salesforce.com";
        const salesforceApiVersion = "52.0";
    
        const options = {instanceUrl: salesforceUrl, loginUrl: salesforceUrl, version: salesforceApiVersion};
       
        const conn = new jsForce.Connection(options);
        await conn.login(username, password + securityToken);
    
        return conn;
    }
          
And then let’s make a request to get the file info with node-fetch:
const fetch = require("node-fetch");

const getFileInfo = async (conn, salesforceFileId, salesforceApiVersion) => {
        const url = `${conn.instanceUrl}/services/data/v${salesforceApiVersion}/sobjects/ContentVersion/${salesforceFileId}`;
        const ops = {
            method: 'GET',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': 'OAuth '+conn.accessToken
            }
        };
        const res = await fetch(url, ops);
    
        if(!res.ok) throw new Error(`error getting file info from ${url} status code: ${res.status} status message ${res.statusText}`);
    
        const json = await res.json();
        return {
            fileTitle: json.Title,
            fileName: `${json.Title}.${json.FileExtension}`,
            fileExtension: json.FileExtension,
            fileId: salesforceFileId,
        };
    }
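Putting both together. A minimal sketch, using the example ContentVersion Id from the URL above (the logged file name is hypothetical):

(async () => {
    const conn = await getJsForceConnection();
    const info = await getFileInfo(conn, '0688J000000Di2xTBA', '52.0');
    console.log(info.fileName); // e.g. 'my-document.pdf'
})();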
          
If you want to download the file without using node-fetch, see https://www.javaniceday.com/post/salesforce-rest-api-download-contentversion
