javaniceday.com

  • Surfshark VPN

    January 29th, 2023

    Surfshark VPN is a great choice for those looking for an easy and secure way to access the internet. Its intuitive user interface and wide array of features make it an excellent option for people of all tech levels. With Surfshark VPN, you can enjoy the benefits of strong 256-bit encryption and unlock restricted content from all around the globe. Surfshark VPN also offers a feature-rich app for iOS and Android, so you can stay safe and secure from anywhere.

    Surfshark VPN is a virtual private network (VPN) service that provides online privacy and security by encrypting internet traffic and routing it through a secure server. The service allows users to access the internet securely and privately from anywhere in the world, hiding their location and identity.

    With Surfshark VPN, users can bypass censorship and geo-restrictions, protect their online activities from being monitored or intercepted, and access restricted websites and content. The VPN service also provides features such as a kill switch, which disconnects the internet connection if the VPN connection drops, and multi-hop VPN, which routes the traffic through multiple servers for additional security.

    Surfshark VPN is available for various platforms, including Windows, Mac, iOS, Android, and Linux, and can be used on unlimited devices with a single subscription. The service provides a user-friendly interface, 24/7 customer support, and a 30-day money-back guarantee.

    In summary, Surfshark VPN is a secure and private VPN service that provides online protection and access to restricted content by encrypting internet traffic and routing it through a secure server.

    Signup

  • Image lazy loading natively

    January 29th, 2023

    These days, image lazy loading is an important technique to improve performance on web applications. Image lazy loading defers loading images until the user scrolls close to them, instead of loading every image up front.

    Natively supporting lazy loading for images makes the user experience much better and also helps optimize performance. Let’s see how to implement image lazy loading natively with just one attribute from the image HTML tag.

          
    <img src="image.jpg" alt="..." loading="lazy" />
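
    The loading attribute is supported by all modern browsers; if you need to cover older ones, you can feature-detect it from JavaScript and fall back to a script-based loader. A minimal sketch (the fallback itself is left to the cross-browser reference below):

    if ('loading' in HTMLImageElement.prototype) {
      // the browser understands loading="lazy"; the plain markup above is enough
    } else {
      // fall back to a JavaScript solution, e.g. an IntersectionObserver-based loader
    }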
          
        

    Reference

    • Cross-browser native lazy loading in JavaScript

    • Lazy Loading

  • Invoke Apex method from Salesforce Flow Builder

    January 29th, 2023

    In Salesforce, developers can invoke Apex methods from within the Flow Builder to execute code logic. Apex methods provide the capability of customizing Flows as needed, allowing users to create new features, validate data, and access records or web services. This post will walk you through how to invoke an Apex method from the Flow Builder.

    What about Flow Builder?

    Flow Builder is the replacement for Process Builder in Salesforce. Both are useful for automating many kinds of processes, but sometimes the out-of-the-box functionality is not enough and we need custom logic, such as a few lines in an Apex class. Let's see how to invoke an Apex method from Flow Builder.

    Create Apex class with Invocable method

          
    public class MyFlowClass {

        @InvocableMethod(label='My flow method'
                         description='A cool description about this method'
                         category='Account')
        public static void execute(List<Id> accountIds) {
            System.debug('account ids: ' + accountIds);
        }
    }

    Create the Flow

    Remember to activate the Flow!

    Test the Flow

    Our flow will be executed every time we create or modify an account, so let's do that.
    First of all, open the Developer Console and go to the Logs tab. Open an account, edit any field, and save it. After that, check the generated log and make sure the debug line is present.

    Reference

    • Flow Builder Trailhead
    • Flow Builder Docs

  • Read file with promise in Node.js

    January 29th, 2023

    Promises are a great way to improve your codebase by preventing callback hell and making asynchronous code look more readable. Working with files in Node.js can be done with callbacks if you’d like, but promises can make it easier. In this post, we’ll go over how to read and write files with promises in Node.js.

          
    const { promises: { readFile } } = require('fs');

    (async () => {
      // read the whole file as UTF-8 text
      const content = await readFile('./test.txt', { encoding: 'utf8', flag: 'r' });
      console.log(content);
    })();
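
    The intro also mentions writing files; the promise-based writeFile works the same way. A minimal sketch, assuming an arbitrary file name and content:

    const { promises: { writeFile } } = require('fs');

    (async () => {
      // create or overwrite a text file using the promise-based API
      await writeFile('./output.txt', 'hello from Node.js', { encoding: 'utf8' });
    })();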
    
          
        

  • How to use node-cache to accelerate queries

    January 29th, 2023

    Node-cache is an easy-to-use module that provides an in-memory caching system for Node.js applications. It helps speed up query operations by caching the results of frequently used queries, so subsequent identical queries return faster.

    Using node-cache is easy. To get started, install it with npm.

    npm install node-cache
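
    Before wiring it into a database helper, here is a minimal sketch of the node-cache API itself; the key names and the 60-second TTL are arbitrary choices for illustration:

    const NodeCache = require('node-cache');

    // stdTTL: default time-to-live for every entry, in seconds
    const cache = new NodeCache({ stdTTL: 60 });

    cache.set('greeting', 'hello');     // store a value
    console.log(cache.get('greeting')); // 'hello'
    console.log(cache.get('missing'));  // undefined when a key is not cached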

    Database access is usually expensive so you should optimize roundtrips as much as possible.

    You can avoid queries by caching at multiple levels:

    • Using a CDN
    • Caching at browser level

    Another option is to cache right before running queries. It can be as easy as: "OK, I'm about to run this query with these parameters; did I execute it recently?"

    How to use node-cache module to cache queries

    We are going to build a small piece of code to show how to use the node-cache module.
    It will be used to cache a SQL query result, but the same approach works for any other result.

          
    const { Pool } = require('pg');
    const NodeCache = require('node-cache');
    const crypto = require('crypto');
    
    const log4js = require('log4js');
    
    const queryCache = new NodeCache();
    const logger = log4js.getLogger('db_helper');
    logger.level = 'info';
    
    // allow self-signed certificates in development; validate them everywhere else
    let rejectUnauthorized = true;
    if (process.env.NODE_ENV === 'development') {
      rejectUnauthorized = false;
    }
    
    // more options: https://node-postgres.com/api/client
    const timeout = process.env.DB_TIMEOUT || 1000 * 10;
    const pool = new Pool({
      connectionString: process.env.DATABASE_URL,
      statement_timeout: timeout,
      query_timeout: timeout,
      connectionTimeoutMillis: timeout,
      ssl: {
        rejectUnauthorized,
      },
    });
    
    /**
     *
     * @param {string} theQuery
     * @param {Array} bindings
     * @param {boolean} withCache true to cache the result
     * @return {Promise<*>}
     */
    module.exports.query = async function (theQuery, bindings = [], withCache = false) {
      if (withCache) {
        logger.info(`executing query with cache ${theQuery}`);
        const stringToHash = `${theQuery}${JSON.stringify(bindings)}`;
        logger.info(`string to hash: ${stringToHash}`);
        const hash = crypto.createHash('sha256').update(stringToHash).digest('hex');
        logger.info(`hash: ${hash}`);
        const value = queryCache.get(hash);
        if (value === undefined) {
          try {
            logger.info('no cache for this query, going to the DB');
            const queryResult = await pool.query(theQuery, bindings);
            queryCache.set(hash, queryResult);
            logger.info(`cache set for ${hash}`);
            return queryResult;
          } catch (error) {
            throw new Error(`Error executing query with cache ${theQuery} error: ${error}`);
          }
        } else {
          logger.info(`returning query result from cache ${theQuery}`);
          logger.info(queryCache.getStats());
          return value;
        }
      } else {
        try {
          logger.info(`executing query without cache ${theQuery}`);
          const result = await pool.query(theQuery, bindings);
    
          // delete all the cache content if we are inserting or updating data
          const auxQuery = theQuery.trim().toLowerCase();
          if (auxQuery.startsWith('insert') || auxQuery.startsWith('update') || auxQuery.startsWith('delete')) {
            queryCache.flushAll();
            queryCache.flushStats();
            logger.info(`the cache was flushed because of the query ${theQuery}`);
          }
          return result;
        } catch (error) {
          throw new Error(`Error executing query without cache  ${theQuery} error: ${error}`);
        }
      }
    };
    
    module.exports.execute = pool;
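
    A hedged example of how this module could be used from application code; the file name db_helper.js, the users table, and the bindings are assumptions for illustration:

    const db = require('./db_helper');

    async function getUserById(id) {
      // third argument true => the result is served from the cache on repeated calls
      return db.query('SELECT * FROM users WHERE id = $1', [id], true);
    }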
    
        
        

    Photo by D.S. Chapman on Unsplash

  • Salesforce integration with AWS AppFlow, S3, Lambda and SQS

    January 29th, 2023

    Salesforce and Amazon’s Web Services (AWS) are two powerful software development and cloud computing tools. In this post, we’ll discuss how these two tools can be integrated for an optimized and efficient workflow.

    The integration of Salesforce and AWS allows businesses to take advantage of the scalability, reliability, and security of both platforms. The integration enables businesses to quickly and efficiently move key data and applications between the cloud platforms and reduces the complexity of integration.

    There are many ways to sync our Salesforce data with third parties in real time. One option is to mix Salesforce and AWS services, specifically Change Data Capture on the Salesforce side and AppFlow on the AWS side. We are going to build a CloudFormation YAML file with everything we need to deploy the integration to any AWS environment. However, it can be a good idea to set it up first by point and click in the AWS console and then translate it into a CloudFormation template.

    If you are using Heroku and Postgres, Heroku Connect is a good option too

    About Salesforce Change Data Capture

    Receive near-real-time changes of Salesforce records, and synchronize corresponding records in an external data store.

    Change Data Capture publishes change events, which represent changes to Salesforce records. Changes include creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record.

    Important:
    Change Data Capture did not support relationships at the time this post was written (08/2021). This means you will only be able to sync the object itself, not its related records, unless you implement some tricks using Process Builder and Apex. That's out of the scope of this post and we will cover it in a separate one, because it requires some extra steps and knowledge.

    To start listening on a specific object, go to Setup -> Integrations -> Change Data Capture and move the object you want to the right-hand list.

    Advantages of using AppFlow approach

    • Data is transferred securely
    • Credentials are managed by the OAuth process
    • No coding is required unless you want to run some specific logic for every sync
    • 100% serverless, pay per use

    Disadvantages of using AppFlow approach

    • The connection must exist before deploying the infrastructure; this is a manual step
    • This approach can take some time to learn and configure, especially if you are already used to making callouts directly from Salesforce

    Requirements for Salesforce

    • Your Salesforce account must be enabled for API access. API access is enabled by default for the Enterprise, Unlimited, Developer, and Performance editions.
    • Your Salesforce account must allow you to install connected apps. If this functionality is disabled, contact your Salesforce administrator. After you create a Salesforce connection in Amazon AppFlow, verify that the connected app named Amazon AppFlow Embedded Login App is installed in your Salesforce account.
    • The refresh token policy for the Amazon AppFlow Embedded Login App must be set to Refresh token is valid until revoked. Otherwise, your flows will fail when your refresh token expires.
    • You must enable change data capture in Salesforce to use event-driven flow triggers.
    • If your Salesforce app enforces IP address restrictions, you must grant access to the addresses used by Amazon AppFlow.
    • To create private connections using AWS PrivateLink, you must enable both the Manage Metadata and Manage External Connections user permissions in your Salesforce account. Private connections are currently available in the us-east-1 and us-west-2 AWS Regions.

    Architecture for the solution

    Let's say we want to listen to changes on the Account object. Every time an Account is created or updated, Salesforce Change Data Capture publishes an event that AppFlow picks up.

    We could add some logic in the Lambda function to decide if we are interested in that change or not.
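
    As a rough sketch of what that Lambda could look like (Node.js with the aws-sdk v2 that ships with the nodejs12.x runtime): the filtering on Name and the single-JSON-object parsing are assumptions for illustration, while the QueueURL environment variable and the FIFO queue come from the template below.

    // Hypothetical handler for src/handlers/my.handler: reads the file AppFlow wrote to S3
    // and forwards the change events we care about to SQS.
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3();
    const sqs = new AWS.SQS();

    exports.handler = async (event) => {
      for (const record of event.Records) {
        // S3 "ObjectCreated" notification: fetch the JSON file AppFlow just dropped
        const object = await s3.getObject({
          Bucket: record.s3.bucket.name,
          Key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
        }).promise();

        const changeEvent = JSON.parse(object.Body.toString('utf8'));

        // Example filter: only forward events that include a Name field
        if (!changeEvent.Name) continue;

        await sqs.sendMessage({
          QueueUrl: process.env.QueueURL,
          MessageBody: JSON.stringify(changeEvent),
          MessageGroupId: 'salesforce-sync', // required because the queue is FIFO
        }).promise();
      }
    };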

    How to create the Salesforce OAuth connection

    As we said, an OAuth connection must exist before deploying our stack to AWS. This is something we have to create by hand. If we deal with different environments in AWS, we can create as many connections as we want, pointing to our different Salesforce instances.

    • Open your AWS console and go to Amazon AppFlow
    • Go to View Flows and click on Connections
    • Click on Create Connection. Select Production (even if you are connecting to a dev org) and provide a connection name
    • Once you click on Continue, a Salesforce popup will open. Enter your Salesforce credentials to log in
    • After that, your connection will be created and available to use

    The Cloudformation template

          
    # Commands to deploy this through SAM CLI
    #  sam build
    #  sam deploy --no-confirm-changeset
    
    AWSTemplateFormatVersion: 2010-09-09
    Description: >-
      app flow lambda + s3 + SQS
    
    Transform:
      - AWS::Serverless-2016-10-31
    
    Parameters:
      Environment:
        Type: String
        Description: Environment name. Example, dev,staging,testing, etc
    
    Globals:
      Function:
        Runtime: nodejs12.x
        Timeout: 30
        MemorySize: 128
    
    
    Resources:
      MyLambda:
        Type: AWS::Serverless::Function
        DependsOn:
          - "MyQueue"
        Properties:
          Handler: src/handlers/my.handler
          Description: Sync up lambda
          Environment:
            Variables:
              QueueURL:
                Ref: "MyQueue"
              MyBucket: !Sub "${AWS::AccountId}-${Environment}-my-bucket"
          Role:
            Fn::GetAtt:
              - "MyLambdaRole"
              - "Arn"
        Tags:
          Name: !Sub "${Environment}-my-lambda"
    
      MyLambdaRole:
        Type: AWS::IAM::Role
        Properties:
          AssumeRolePolicyDocument:
            Statement:
              - Effect: Allow
                Action: "sts:AssumeRole"
                Principal:
                  Service:
                    - "lambda.amazonaws.com"
            Version: "2012-10-17"
          ManagedPolicyArns:
            - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
          Policies:
            - PolicyName: AccessOnMyQueue
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Effect: Allow
                    Action: "sqs:SendMessage"
                    Resource:
                      - Fn::GetAtt:
                          - "MyQueue"
                          - "Arn"
            - PolicyName: AccessToS3Notifications
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Effect: Allow
                    Action:
                      - 's3:GetBucketNotification'
                    Resource: !Sub 'arn:aws:s3:::${AWS::AccountId}-${Environment}-my-bucket'
            - PolicyName: AccessOnS3Objects
              PolicyDocument:
                Version: 2012-10-17
                Statement:
                  - Effect: Allow
                    Action:
                      - "s3:GetObject"
                    Resource: !Sub 'arn:aws:s3:::${AWS::AccountId}-${Environment}-my-bucket/*'
    
    
      MyBucket:
        Type: AWS::S3::Bucket
        DependsOn:
          - MyLambda
        Properties:
          BucketName: !Sub "${AWS::AccountId}-${Environment}-my-bucket"
          NotificationConfiguration:
            LambdaConfigurations:
              - Event: 's3:ObjectCreated:*'
                Function: !GetAtt MyLambda.Arn
          LifecycleConfiguration:
            Rules:
              - Id: ExpirationInDays
                Status: 'Enabled'
                ExpirationInDays: 3
              - Id: NoncurrentVersionExpirationInDays
                Status: 'Enabled'
                NoncurrentVersionExpirationInDays: 3
    
      MyBucketPolicy:
        Type: AWS::S3::BucketPolicy
        DependsOn: MyBucket
        Properties:
          Bucket: !Ref MyBucket
          PolicyDocument:
            Version: '2008-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: appflow.amazonaws.com
                Action:
                  - s3:PutObject
                  - s3:AbortMultipartUpload
                  - s3:ListMultipartUploadParts
                  - s3:ListBucketMultipartUploads
                  - s3:GetBucketAcl
                  - s3:PutObjectAcl
                Resource:
                  - !Sub "arn:aws:s3:::${AWS::AccountId}-${Environment}-my-bucket"
                  - !Sub "arn:aws:s3:::${AWS::AccountId}-${Environment}-my-bucket/*"
    
      MyQueue:
        Type: AWS::SQS::Queue
        Properties:
          QueueName: !Sub "${Environment}-my-queue.fifo"
          FifoQueue: true
          ContentBasedDeduplication: true
          RedrivePolicy:
            deadLetterTargetArn:
              Fn::GetAtt:
                - "MyDeadLetterQueue"
                - "Arn"
            maxReceiveCount: 2
    
      MyDeadLetterQueue:
        Type: AWS::SQS::Queue
        Properties:
          QueueName: !Sub "${Environment}-my-queue-dlq.fifo"
          FifoQueue: true
          MessageRetentionPeriod: 1209600 # 14 days (the max supported)
    
      MyQueuePolicy:
        DependsOn:
          - "MyQueue"
        Type: AWS::SQS::QueuePolicy
        Properties:
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Principal:
                  Service:
                    - "events.amazonaws.com"
                    - "sqs.amazonaws.com"
                Action:
                  - "sqs:SendMessage"
                  - "sqs:GetQueueUrl"
                  - "sqs:DeleteMessage"
                  - "sqs:ReceiveMessage"
                Resource:
                  Fn::GetAtt:
                    - "MyQueue"
                    - "Arn"
          Queues:
            - Ref: "MyQueue"
    
      # AppFlow flow to connect SFDC and AWS
      MyAppFlow:
        Type: AWS::AppFlow::Flow
        Properties:
          FlowName: !Sub "${Environment}-my-app-flow"
          Description: Flow to sync up with Salesforce
          TriggerConfig:
            TriggerType: Event
          SourceFlowConfig:
            ConnectorType: Salesforce
            ConnectorProfileName: !Sub "${Environment}-my-connection" # the name of the Oauth connection created in AWS console
            SourceConnectorProperties:
              Salesforce:
                Object: Account__ChangeEvent
                EnableDynamicFieldUpdate: false
                IncludeDeletedRecords: true
          DestinationFlowConfigList:
            - ConnectorType: S3
              DestinationConnectorProperties:
                S3:
                  BucketName: !Ref MyBucket
                  S3OutputFormatConfig:
                    AggregationConfig:
                      AggregationType: None
                    PrefixConfig:
                      PrefixFormat: MINUTE
                      PrefixType: FILENAME
                    FileType: JSON
          Tasks:
            - TaskType: Filter
              ConnectorOperator:
                Salesforce: PROJECTION
              SourceFields:
                - Name
            - TaskType: Map
              SourceFields:
                - Name
              TaskProperties:
                - Key: SOURCE_DATA_TYPE
                  Value: Name
                - Key: DESTINATION_DATA_TYPE
                  Value: Name
              DestinationField: Name
    
          
        

    Debugging

    It’s important we have a way to troubleshoot in case things go wrong. Since this integration deals with different AWS services, we have to see what we have available in each one.

    • AppFlow run history
    • CloudWatch for our Lambda
    • Spy on S3 to see objects created
    • Spy on SQS messages created (monitor tab)

    Resources

    • https://docs.aws.amazon.com/appflow/latest/userguide/salesforce.html
    • https://developer.salesforce.com/docs/atlas.en-us.change_data_capture.meta/change_data_capture/cdc_intro.htm

  • Mock AWS SDK with Jest in TypeScript

    January 29th, 2023
    The AWS SDK is an incredibly powerful tool for managing resources in the cloud. But what if you want to use the SDK without having to deploy your code to AWS?

    Mocking the SDK with Jest and TypeScript is a great way to quickly and easily test code that interacts with AWS services. By mocking the SDK, you can simulate requests and responses to ensure that everything works as expected.

    This tutorial will walk you through how to create a mock of the AWS SDK and how to use it with Jest and TypeScript.

    When writing automated tests for applications that use Amazon Web Services (AWS) APIs, it can be difficult to test code that interacts with the AWS SDK. In order to make testing easier, you can use Jest in combination with TypeScript to mock the AWS SDK and simulate responses from the AWS service. This makes it possible to create reliable tests for your application without actually calling AWS APIs.

    Advantages of mocking the AWS SDK

    • Avoiding real AWS services saves a lot of money, especially if you have thousands of tests. Even if you use LocalStack, getting the project configured from checkout to a first run takes more time.
    • Your tests will run faster

    Disadvantages of mocking the AWS SDK

    • It can slow down the development process
    • You need mocks for almost every scenario; otherwise, your tests will call the real code (which is what we want to avoid)

    Let’s see an example of how to mock the AWS SDK with Jest and TypeScript

    The following code mocks two methods of SQS: receiveMessage and deleteMessage. If your code uses more methods of the AWS SDK you will have to mock all of them. Otherwise, your tests will call the real code.

    import AWS from "aws-sdk";
    jest.mock("aws-sdk");
    const mockAws = AWS as jest.Mocked<typeof AWS>;
    
    const mySQSMock = {
      receiveMessage: () => {
        return {
          promise: () =>
            Promise.resolve({
              Messages: [{ Body: "test" }],
            }),
        };
      },
      deleteMessage: () => {
        return {
          promise: () => Promise.resolve(),
        };
      },
    };
    
    describe("My test suite", () => {
      beforeAll(() => {
        mockAws.SQS.mockImplementation(jest.fn().mockImplementation(() => mySQSMock));
      });
    
      test("My test", async () => {
        // TBD
      });
    });
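
    With that in place, the // TBD body inside the describe block could, for example, assert against the mocked response; the queue URL below is arbitrary because the mock ignores its arguments:

    test("My test", async () => {
      // new AWS.SQS() returns mySQSMock here, so no real call to AWS is made
      const sqs = new AWS.SQS();
      const result = await sqs.receiveMessage({ QueueUrl: "https://example.com/my-queue" }).promise();

      expect(result.Messages?.[0]?.Body).toBe("test");
    });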

  • Redis: delete all keys with a prefix

    January 29th, 2023

    Redis, one of the most popular open-source in-memory databases, provides a simple and straightforward way to delete all keys with a given prefix. This can be done with the help of the KEYS and DEL commands. First, use KEYS to find all keys with a given prefix; this command searches the given database and returns all matching keys. Next, use DEL to delete those keys. For example, if you wanted to delete all keys starting with dev, you would list them with KEYS dev* and pass each match to DEL, as shown below.

    Delete all the keys by a given prefix in a Redis cluster.

    Let’s ping our Redis instance

       
    redis-cli -h myhost.com -p 6379 ping
       
    

    Set some keys

       
    redis-cli -h myhost.com -p 6379 SET dev1 "val1"
    redis-cli -h myhost.com -p 6379 SET dev2 "val2"
    redis-cli -h myhost.com -p 6379 SET dev3 "val3"
       
    

    Get one key

       
    redis-cli -h myhost.com -p 6379 KEYS dev1 
       
    

    Delete one key

       
    redis-cli -h myhost.com -p 6379 DEL dev1 
       
    

    Now let's move on to the mass-deletion loop, but before deleting anything, let's dry-run the algorithm without making changes.

       
    for key in `echo 'KEYS dev*' | redis-cli -c -h myhost.com -p 6379 | awk '{print $1}'`
      do echo KEYS $key
    done | redis-cli -c -h myhost.com -p 6379
       
    

    And then when you are sure, go ahead with the deletion

       
    for key in `echo 'KEYS dev*' | redis-cli -c -h myhost.com -p 6379 | awk '{print $1}'`
      do echo DEL $key
    done | redis-cli -c -h myhost.com -p 6379
       
    

    In case you are not using a cluster, just remove the -c option from redis-cli.

    Photo by Sam Pak on Unsplash

  • Handlebars and AWS Lambda example

    January 29th, 2023

    In today’s post, we’ll explore how to use Handlebars with AWS Lambda, and how this can help you build dynamic and customizable web applications.

    Firstly, let’s take a quick look at what Handlebars is. It’s a popular templating engine that allows you to define dynamic templates using a simple syntax. Handlebars templates are flexible, and allow you to inject data into your web applications based on user input, real-time data, and more.

    So, how can we use Handlebars with AWS Lambda? The flexibility of Lambda allows us to define and execute serverless functions based on events such as HTTP requests, user interactions, and more. By building a Handlebars template within a Lambda function, you can dynamically generate HTML pages based on user input or other parameters.


    Here’s an example of how you can use Handlebars with AWS Lambda in a Node.js environment:

    const handlebars = require('handlebars');
    const {promises: {readFile}} = require("fs");
    
    let templateSource;

    // read the template once and reuse it across warm invocations of the Lambda
    const renderToString = async (data) => {
        if (!templateSource) templateSource = await readFile('./views/content.hbs', {encoding: 'utf8', flag: 'r'});
        const template = handlebars.compile(templateSource);
        return template(data);
    }

    let response;
    let html;
    // captured once per Lambda container, so it only changes on a cold start
    const lambdaLoadedDate = new Date().toISOString();
    
    exports.lambdaHandler = async (event, context) => {
        try {
            const now = new Date().toISOString();
            html = await renderToString({title: 'im the title', now: now, lambda_date: lambdaLoadedDate});
            response = {
                statusCode: 200,
                headers: {
                    'Content-Type': 'text/html',
                },
                body: html
            }
        } catch (err) {
            console.log(err);
            return err;
        }
    
        return response
    };
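
    The handler reads a template from ./views/content.hbs, which isn't shown in the post; a minimal template using the three values passed to renderToString might look like this (the markup itself is just an assumption):

    <html>
      <body>
        <h1>{{title}}</h1>
        <p>Rendered at {{now}}</p>
        <p>Lambda container loaded at {{lambda_date}}</p>
      </body>
    </html>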

    Handlebars is an easy-to-use templating language that helps you create rich and dynamic websites. With its intuitive syntax and powerful features, it’s a great choice for quickly and easily building powerful web applications.

    Combining Handlebars with AWS Lambda can make your life even easier. AWS Lambda is a serverless compute service that can run code without needing to provision or manage servers. By using Handlebars alongside AWS Lambda, you can quickly create websites or applications without having to write a lot of code.

  • Full-text search in Node JS. Search-related data

    January 29th, 2023

    Full-text search in Node.js is a powerful tool for searching for information stored in a database. By leveraging the capabilities of the Node.js platform, developers can easily incorporate full-text search into their applications, allowing users to quickly find relevant data based on keywords or phrases in their queries. This can help reduce the time and effort needed to search through large datasets. Moreover, it can be used to provide search-related data such as analytics, visualization, and more.


    If you are building a website, an e-commerce site, a blog, etc., you will need full-text search to find related content, much like Google does for every web page. This is a well-known problem, so you probably don't want to implement your own solution from scratch.

    One option is to use the FlexSearch module for Node.js.

    So let’s create a small Proof of Concept (POC) from scratch.

    The full source code is here

    Keep in mind that it's an in-memory implementation, so it won't be possible to index a huge amount of data. You can make your own benchmarks based on your requirements.

    Setting up

    Install the Express generator (npm install -g express-generator) if you haven't already.

    Also, I strongly recommend you install a plugin in your browser to see JSON in a pretty-print format. I use JSONView. Another option is to use Postman to make your HTTP requests.

    mkdir myflexsearch
    cd myflexsearch
    express --no-view --git

    You can delete boilerplate code such as the /public folder and routes/users.js. After that, you will have to modify app.js because they are referenced there. Anyway, that code doesn't affect our Proof of Concept.

    Let’s install flexsearch module

    npm install flexsearch --save

    Optionally, you can install the nodemon module to automatically reload your app after every change. You can install it globally, but I will install it locally.

    npm install nodemon --save

    After that, open package.json and modify the start script:

    "scripts": {
        "start": "nodemon ./bin/www"
    }

    Let’s code

    Our main code will be at routes/index.js. This will be our endpoint to expose a service to search like this:

    /search?phrase=Cloud

    Import the module

    const FlexSearch = require("flexsearch");
    const preset = "score";
    const searchIndex = new FlexSearch(preset);

    With preset = “score” we are defining behavior for our search. You can see more presets here. I recommend you play with different presets and see results.

    We’ll need some dummy data to test.

    What I’ve done is to create a file /daos/my_data.js with some content from here: https://api.publicapis.org/entries

    Summary steps

    • Build our index
    • Define a key, typically the ID field of the elements we index (user.id, book.id, etc.)
    • Define the content we want to search in. Example: the body of our blog post plus some description and its category
    • Expose a search service through a URL parameter
    • Build our index if it is empty
    • Get the phrase to search for from a URL parameter
    • Search our index and get back a list of matching IDs
    • With those IDs, get the elements from our indexed collection
    • Make requests to test our data

    Building the index
    function buildIndex() {
      console.time('buildIndexTook');
      console.info('building index...');
      const { data } = wsData; // we could get our data from a DB, a remote web service, etc.
      for (let i = 0; i < data.length; i++) {
        // concatenate the fields we want to be searchable as the content
        const content = `${data[i].API} ${data[i].Description} ${data[i].Category}`;
        const key = parseInt(data[i].id, 10);
        searchIndex.add(key, content);
      }
      console.info(`index built, length: ${searchIndex.length}`);
      console.info('Open a browser at http://localhost:3000/');
      console.timeEnd('buildIndexTook');
    }

    Keep in mind that we are working with an in-memory search, so be careful with the amount of data you load into the index.
    This method shouldn't take more than a couple of seconds to run.

    Basically, in the buildIndex() method we get our data from a static file, but we could get it from a remote web service or a database.
    Then we indicate a key for each entry and the content to index.

    After that our index is ready to receive queries.

    Exposing the service to search

    router.get('/search', async (req, res, next) => {
      try {
        if (searchIndex.length === 0) {
          await buildIndex();
        }
    
        const { phrase } = req.query;
        if (!phrase) {
          throw Error('phrase query parameter empty');
        }
        console.info(`Searching by: ${phrase}`);
        // search using flexsearch. It will return a list of IDs we used as keys during indexing
        const resultIds = await searchIndex.search({
          query: phrase,
          suggest: true, // When suggestion is enabled all results will be filled up (until limit, default 1000) with similar matches ordered by relevance.
        });
    
        console.info(`results: ${resultIds.length}`);
        const results = getDataByIds(resultIds);
        res.json(results);
      } catch (e) {
        next(e);
      }
    });
    

    Here we expose a typical Express endpoint that receives the phrase to search for through a query string parameter called phrase.
    The search returns the keys that matched our phrase; after that, we have to look up the corresponding elements in our dataset to display them.

    function getDataByIds(idsList) {
      const result = [];
      const { data } = wsData;
      for (let i = 0; i < data.length; i++) {
        if (idsList.includes(data[i].id)) {
          result.push(data[i]);
        }
      }
      return result;
    }

    We are just iterating over our collection here, but typically we would query a database.


    Making requests

    Our last step is just to make some test requests with our browser, Postman, curl, or any other tool.

    Some examples:

    • http://localhost:3000/search?phrase=Cryptocurrency
    • http://localhost:3000/search?phrase=Cloud
    • http://localhost:3000/search?phrase=File
    • http://localhost:3000/search?phrase=Storage
    • http://localhost:3000/search?phrase=Open%20Threat

    That’s it. See the full source code

    Tip: if you are working with MySQL, you can try its own full-text implementation
