AWS Lambda functions are a great way to automate certain tasks and processes in the cloud. They can be triggered by events, such as a file upload to an S3 bucket or a message sent to an SNS topic, allowing you to execute some code in response.
In this post, we’ll walk through how to write data to an S3 bucket from a Lambda function using the AWS SDK for Node.js. This can be useful for a variety of tasks, such as archiving log files or uploading data to a data lake. S3 (Simple Storage Service) is Amazon’s object storage service, which lets you store and access data from anywhere in the world, and writing to a bucket from a Lambda function is a simple way to persist data in the cloud.
Take this example as a starting point. This is not production-ready code; you will probably need to tweak the permissions to meet your requirements.
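A minimal sketch of such a function with the AWS SDK for Node.js (v2) could look like the following. The bucket name, the environment variable, and the object key are placeholders chosen for illustration; the Lambda execution role is assumed to allow s3:PutObject on the bucket.

```javascript
// index.js: minimal sketch of a Lambda function that writes to S3 (AWS SDK for Node.js v2).
// BUCKET_NAME is a placeholder environment variable; the execution role is assumed
// to allow s3:PutObject on that bucket.
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

exports.handler = async (event) => {
  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: `events/${Date.now()}.json`,   // where the object will live inside the bucket
    Body: JSON.stringify(event),        // here we simply store the incoming event payload
    ContentType: 'application/json',
  };

  await s3.putObject(params).promise();

  return { statusCode: 200, body: `Stored ${params.Key}` };
};
```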
Salesforce and Amazon Web Services (AWS) are two powerful software development and cloud computing platforms. In this post, we’ll discuss how these two tools can be integrated for an optimized and efficient workflow.
Integrating Salesforce and AWS lets businesses take advantage of the scalability, reliability, and security of both platforms. It enables them to move key data and applications between the two clouds quickly and efficiently while reducing the complexity of the integration.
There are many ways to sync up our Salesforce data with third parties in real time. One option is a mix of Salesforce and AWS services, specifically Change Data Capture from Salesforce and AppFlow from AWS. We are going to build a CloudFormation yml file with everything we need to deploy our integration in any AWS environment. However, it can be a good idea to set it up first by point and click through the AWS console and then translate it into a CloudFormation template.
If you are using Heroku and Postgres, Heroku Connect is a good option too.
About Salesforce Change Data Capture
Receive near-real-time changes of Salesforce records, and synchronize corresponding records in an external data store.
Change Data Capture publishes change events, which represent changes to Salesforce records. Changes include creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record.
Important:
Change Data Capture does not support relationships at the time this post was written (08/2021). This means you will not be able to sync anything beyond your object unless you implement some tricks using Process Builder and Apex. That’s out of the scope of this post and we are going to cover it in a different one because it requires some extra steps and knowledge.
To start listening to a specific object, go to Setup -> Integrations -> Change Data Capture and move the object you want to the right.
Advantages of the AppFlow approach
Data is transferred securely
Credentials are managed by the OAuth process
No coding required unless you want to run specific logic on every sync
100% serverless, pay as you go
Disadvantages of the AppFlow approach
The connection must exist before deploying the infrastructure. This is a manual step.
This approach can take some time to learn and configure, especially if you are already familiar with callouts from Salesforce.
Requirements for Salesforce
Your Salesforce account must be enabled for API access. API access is enabled by default for the Enterprise, Unlimited, Developer, and Performance editions.
Your Salesforce account must allow you to install connected apps. If this functionality is disabled, contact your Salesforce administrator. After you create a Salesforce connection in Amazon AppFlow, verify that the connected app named Amazon AppFlow Embedded Login App is installed in your Salesforce account.
The refresh token policy for the Amazon AppFlow Embedded Login App must be set to Refresh token is valid until revoked. Otherwise, your flows will fail when your refresh token expires.
You must enable change data capture in Salesforce to use event-driven flow triggers.
If your Salesforce app enforces IP address restrictions, you must grant access to the addresses used by Amazon AppFlow.
To create private connections using AWS PrivateLink, you must enable both the Manage Metadata and Manage External Connections user permissions in your Salesforce account. Private connections are currently available in the us-east-1 and us-west-2 AWS Regions.
Architecture for the solution
Let’s say we want to listen to changes on the Account object. Every time an Account is created or updated, an event will be sent to AppFlow through Salesforce Change Data Capture.
We could add some logic in the Lambda function to decide if we are interested in that change or not.
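As an illustration only, a hypothetical sketch of that filtering logic is shown below. The exact shape of the event the Lambda receives depends on how the flow delivers records, but Salesforce change events carry a ChangeEventHeader with fields such as entityName, changeType, and recordIds, which is what we inspect here.

```javascript
// handler.js: hypothetical sketch of the filtering logic inside the Lambda function.
// The event shape is an assumption; adapt it to however your flow delivers records.
exports.handler = async (event) => {
  const header = event.ChangeEventHeader || {};

  // changeType is one of CREATE, UPDATE, DELETE, UNDELETE
  const interesting =
    header.entityName === 'Account' &&
    ['CREATE', 'UPDATE'].includes(header.changeType);

  if (!interesting) {
    console.log('Ignoring change event:', header.changeType, header.entityName);
    return;
  }

  // At this point we would update the external data store with the changed records
  console.log('Processing Account change for records:', header.recordIds);
};
```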
How to create the Salesforce OAuth Connection
As we said, an OAuth connection must exist before deploying our stack to AWS; this is something we have to create by hand. If we deal with different environments in AWS, we can create as many connections as we need, each pointing to a different Salesforce instance.
Open the AWS console and go to Amazon AppFlow.
Go to View flows and click on Connections.
Click on Create connection. Select Production (even if you have a dev org) and provide a connection name.
Once you click on Continue, a Salesforce popup will open. Enter your Salesforce credentials to log in.
After that, your connection will be created and available to use.
It’s important to have a way to troubleshoot in case things go wrong. Since this integration involves several AWS services, we have to see what troubleshooting tools are available in each one.
Let’s say you want to process images in the background, or run any other task that requires heavy processing, and you don’t want to tie this operation to any other core operation or service.
Suppose service A deals with users:
Sign in
Sign up
Forgot password
Profile update
etc
Every time a user uploads an image to their profile, you want to resize it and generate multiple thumbnails for multiple platforms or devices. You could do this operation under the umbrella of service A, but soon this business logic will grow and turn our microservice into a much larger service.
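Purely as an illustration of that thumbnail-generation work, here is a small sketch using the sharp library; sharp and the list of sizes are my assumptions, not something the post prescribes.

```javascript
// thumbnails.js: illustrative sketch of the image-resizing work described above.
// The sharp library and the size list are assumptions for the example.
const sharp = require('sharp');

const SIZES = [64, 128, 256]; // one thumbnail per target platform/device

async function generateThumbnails(imageBuffer) {
  // Resize the original image once per target width, preserving aspect ratio
  return Promise.all(
    SIZES.map((width) =>
      sharp(imageBuffer)
        .resize(width)
        .toBuffer()
        .then((buffer) => ({ width, buffer }))
    )
  );
}

module.exports = { generateThumbnails };
```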
So, you decide to move this kind of operation to a new microservice (service B), but how do the two services communicate with each other?
One option is to call the other service directly, but that ties both services together. Another option (the one we follow in this post) is to broadcast an event from service A called “profile updated”.
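As a rough sketch, the broadcasting side could look like this with the AWS SDK for Node.js; the topic ARN environment variable and the message shape are assumptions for the example.

```javascript
// events.js: sketch of service A broadcasting the "profile updated" event to SNS.
// PROFILE_UPDATED_TOPIC_ARN is a hypothetical environment variable.
const AWS = require('aws-sdk');

const sns = new AWS.SNS();

async function publishProfileUpdated(userId, imageUrl) {
  await sns
    .publish({
      TopicArn: process.env.PROFILE_UPDATED_TOPIC_ARN,
      Message: JSON.stringify({ event: 'profile updated', userId, imageUrl }),
    })
    .promise();
}

module.exports = { publishProfileUpdated };
```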
Now we have to see how service B is notified so it can start processing the image, and this is where SNS and SQS come in.
In the following example, I show you how to deploy an SNS topic that writes a message to an SQS queue. After that, you could have a Lambda function triggered by this queue, but that is out of the scope of this post.
I strongly recommend that you start playing in a brand-new project instead of trying to add more stuff to an existing one. That way, it will be easier to narrow down errors.
Beware of indentation. Use a code formatter such as the one in WebStorm to auto-format the document. Otherwise, you will go crazy looking at misleading errors.