Handlebars is a popular templating library that can be used to generate dynamic HTML pages. You can use Handlebars in an AWS Lambda function by following these steps:
Create an AWS Lambda function: To start, you need to create an AWS Lambda function in which you will write your code.
Install Handlebars: To use Handlebars in your Lambda function, you need to install it as a dependency. You can do this by adding handlebars to the dependencies section of your project’s package.json file and running npm install.
Load the Handlebars library: In your Lambda function code, you need to load the Handlebars library. You can do this by including the following code:
const handlebars = require('handlebars');
Compile a Handlebars template: To use Handlebars, you first need to compile a Handlebars template that specifies the structure of your HTML page. You can do this by using the following code:
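For example (the inline template string here is just a placeholder; you could also load a .hbs file from disk):

const source = '<h1>Hello, {{name}}!</h1>'; // placeholder template
const template = handlebars.compile(source);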
Render the template: To render the template, you need to pass in an object that provides the values for the template’s variables. You can do this by using the following code:
const html = template({ name: 'AWS Lambda' });
Return the generated HTML: Finally, you need to return the generated HTML from your Lambda function. You can do this by including the following code:
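For example, if the function sits behind API Gateway, the response could look like this (the exact shape depends on your integration):

return {
  statusCode: 200,
  headers: { 'Content-Type': 'text/html' },
  body: html,
};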
Deploy the function: After you have written your code, you need to deploy your Lambda function to AWS. You can do this using the AWS CLI or the AWS Management Console.
These are the basic steps for using Handlebars in an AWS Lambda function. You can customize this code to meet the specific needs of your application.
AWS Lambda functions are a great way to automate certain tasks and processes in the cloud. They can be triggered by events, such as a file upload to an S3 bucket or a message sent to an SNS topic, allowing you to execute some code in response.
In this post, we’ll show you how to write data to an S3 bucket from a Lambda function. This can be useful for a variety of tasks, such as archiving log files or uploading data to a data lake.
S3 (Simple Storage Service) is Amazon’s cloud storage solution that allows you to store and access data from anywhere in the world. Writing to an S3 bucket from a Lambda function is a simple way to store and access data in the cloud.
In this post, I’ll walk you through how to write to an S3 bucket from a Lambda function. We’ll use the AWS SDK for Node.js to access S3.
Take this example as a starting point. It is not production-ready code; you will probably need to tweak permissions to meet your requirements.
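Here is a minimal sketch using the AWS SDK v2 for Node.js. The bucket name and key are placeholders, and the Lambda execution role needs s3:PutObject on the bucket:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.lambdaHandler = async (event) => {
  const params = {
    Bucket: 'my-bucket',              // placeholder bucket name
    Key: `logs/${Date.now()}.json`,   // placeholder key
    Body: JSON.stringify(event),      // store the incoming event as an example payload
    ContentType: 'application/json',
  };
  await s3.putObject(params).promise();
  return { statusCode: 200, body: 'stored' };
};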
Salesforce and Amazon’s Web Services (AWS) are two powerful software development and cloud computing tools. In this post, we’ll discuss how these two tools can be integrated for an optimized and efficient workflow.
The integration of Salesforce and AWS allows businesses to take advantage of the scalability, reliability, and security of both platforms. The integration enables businesses to quickly and efficiently move key data and applications between the cloud platforms and reduces the complexity of integration.
There are many ways to sync our Salesforce data with third parties in real time. One option is a mix of Salesforce and AWS services, specifically Change Data Capture from Salesforce and AppFlow from AWS. We are going to build a CloudFormation yml file with everything we need to deploy our integration on any AWS environment. However, it can be a good idea to set it up first by point and click through the AWS console and then translate it into a CloudFormation template.
If you are using Heroku and Postgres, Heroku Connect is a good option too.
About Salesforce Change Data Capture
Receive near-real-time changes of Salesforce records, and synchronize corresponding records in an external data store.
Change Data Capture publishes change events, which represent changes to Salesforce records. Changes include creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record.
Important:
Change Data Capture does not support relationships at the time this post was written (08/2021). This means you will not be able to sync anything beyond the object itself unless you implement some tricks using Process Builder and Apex. That's out of the scope of this post and we will cover it in a different one, because it requires some extra steps and knowledge.
To start listening on a specific object, go to Setup -> Integrations -> Change Data Capture and move the objects you want to the right.
Advantages of using AppFlow approach
Data is transferred securely
Credentials are managed by the OAuth process
No coding required unless you want to run some specific logic for every sync up
100% serverless, pay as you go
Disadvantages of using AppFlow approach
The connection must exist before deploying the infrastructure. This is a manual step.
This approach can take some time to learn and configure, especially if you are already familiar with callouts from Salesforce.
Requirements for Salesforce
Your Salesforce account must be enabled for API access. API access is enabled by default for the Enterprise, Unlimited, Developer, and Performance editions.
Your Salesforce account must allow you to install connected apps. If this functionality is disabled, contact your Salesforce administrator. After you create a Salesforce connection in Amazon AppFlow, verify that the connected app named Amazon AppFlow Embedded Login App is installed in your Salesforce account.
The refresh token policy for the Amazon AppFlow Embedded Login App must be set to Refresh token is valid until revoked. Otherwise, your flows will fail when your refresh token expires.
You must enable change data capture in Salesforce to use event-driven flow triggers.
If your Salesforce app enforces IP address restrictions, you must grant access to the addresses used by Amazon AppFlow.
To create private connections using AWS PrivateLink, you must enable both the Manage Metadata and Manage External Connections user permissions in your Salesforce account. Private connections are currently available in the us-east-1 and us-west-2 AWS Regions.
Architecture for the solution
Let's say we want to listen to changes on the Account object. Every time an Account is created or updated, an event will flow to AppFlow through Salesforce Change Data Capture.
We could add some logic in the Lambda function to decide whether we are interested in that change or not, as sketched below.
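For illustration only: the event envelope below is an assumption (it depends on how the change reaches the Lambda, for example via an EventBridge rule), while the ChangeEventHeader fields come from Salesforce Change Data Capture.

exports.lambdaHandler = async (event) => {
  const change = event.detail || event;            // assumed envelope; adjust to your delivery path
  const header = change.ChangeEventHeader || {};
  // Only react to creations and updates of Account records
  if (header.entityName !== 'Account') return;
  if (!['CREATE', 'UPDATE'].includes(header.changeType)) return;
  console.info(`Processing ${header.changeType} for records ${header.recordIds}`);
  // ... business logic here ...
};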
How to create the Salesforce OAuth Connection
As we said, an OAuth connection must exist before deploying our stack to AWS. This is something we have to create by hand. If we deal with different environments in AWS, we can create as many connections as we want, pointing to our different Salesforce instances.
Open your AWS console and go to Amazon AppFlow
Go to View Flows and click on Connections
Click on Create Connection. Select Production (even if you have a dev org) and provide a connection name
Once you click on Continue, a Salesforce popup will open. Enter your Salesforce credentials to log in
After that, your connection will be created and available to use
It's important to have a way to troubleshoot in case things go wrong. Since this integration deals with several AWS services, we have to know what is available in each one.
Handlebars is an easy-to-use templating language that helps you create rich and dynamic websites. With its intuitive syntax and powerful features, it’s a great choice for quickly and easily building powerful web applications.
Combining Handlebars with AWS Lambda can make your life even easier. AWS Lambda is a serverless compute service that can run code without needing to provision or manage servers. By using Handlebars alongside AWS Lambda, you can quickly create websites or applications without having to write a lot of code.
const handlebars = require('handlebars');
const { promises: { readFile } } = require('fs');

// Cache the template source between warm invocations
let templateSource;

const renderToString = async (data) => {
  if (!templateSource) {
    templateSource = await readFile('./views/content.hbs', { encoding: 'utf8', flag: 'r' });
  }
  const template = handlebars.compile(templateSource);
  return template(data);
};

let response;
let html;
// Captured once, when the Lambda container is loaded
const lambdaLoadedDate = new Date().toISOString();

exports.lambdaHandler = async (event, context) => {
  try {
    const now = new Date().toISOString();
    html = await renderToString({ title: 'im the title', now: now, lambda_date: lambdaLoadedDate });
    response = {
      statusCode: 200,
      headers: {
        'Content-Type': 'text/html',
      },
      body: html,
    };
  } catch (err) {
    console.log(err);
    return err;
  }
  return response;
};
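For reference, the ./views/content.hbs template used above can be as simple as this (the markup is illustrative):

<html>
  <body>
    <h1>{{title}}</h1>
    <p>Rendered at: {{now}}</p>
    <p>Lambda loaded at: {{lambda_date}}</p>
  </body>
</html>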
S3 is a powerful object storage service that is offered by Amazon Web Services (AWS). It is popularly used for storing data, and specifically it can also be used as an effective way to store large files. Node.js makes it possible to upload large files to S3 with ease, and in this post, we’ll take a look at how you can do it.
Uploading large files to Amazon S3 is a common use case that many developers need to solve. If you’re using Node.js, the AWS SDK offers an easy way to upload files to S3 using streams. Streams enable you to read and write large amounts of data without having to store it all in memory.
Sometimes you need to upload a big file, let's say larger than 100MB. Streaming it from disk is the way to go to avoid loading the entire file into memory.
To get started, you'll first need to install the AWS SDK and configure credentials. Once that's done, you can use the createReadStream function to read the file as a stream and the S3 createMultipartUpload, uploadPart, and completeMultipartUpload functions to send it in parts.
The AWS API provides methods to upload a big file in parts (chunks).
The main steps are:
Let the API know that we are going to upload a file in chunks
Stream the file from disk and upload each chunk
Let the API know all the chunks were uploaded
const fs = require('fs');
const AWS = require('aws-sdk');

/**
 *
 * @param {string} fileName the name in S3
 * @param {string} filePath the absolute path to our local file
 * @return the final file name in S3
 */
async function uploadToS3(fileName, filePath) {
  if (!fileName) {
    throw new Error('the fileName is empty');
  }
  if (!filePath) {
    throw new Error('the file absolute path is empty');
  }
  const fileNameInS3 = `/some/sub/folder/${fileName}`; // the relative path inside the bucket
  console.info(`file name: ${fileNameInS3} file path: ${filePath}`);
  if (!fs.existsSync(filePath)) {
    throw new Error(`file does not exist: ${filePath}`);
  }
  const bucket = 'my-bucket';
  const s3 = new AWS.S3();
  const statsFile = fs.statSync(filePath);
  console.info(`file size: ${Math.round(statsFile.size / 1024 / 1024)}MB`);
  // Each part must be at least 5 MB in size, except the last part.
  let uploadId;
  try {
    const params = {
      Bucket: bucket,
      Key: fileNameInS3,
    };
    const result = await s3.createMultipartUpload(params).promise();
    uploadId = result.UploadId;
    console.info(`csv ${fileNameInS3} multipart created with upload id: ${uploadId}`);
  } catch (e) {
    throw new Error(`Error creating S3 multipart. ${e.message}`);
  }

  const chunkSize = 10 * 1024 * 1024; // 10MB
  const readStream = fs.createReadStream(filePath); // you can use a second parameter here with this option to read with a bigger chunk size than 64 KB: { highWaterMark: chunkSize }

  // read the file to upload using streams and upload part by part to S3
  const uploadPartsPromise = new Promise((resolve, reject) => {
    const multipartMap = { Parts: [] };
    let partNumber = 1;
    let chunkAccumulator = null;

    readStream.on('error', (err) => {
      reject(err);
    });

    readStream.on('data', (chunk) => {
      // it reads in chunks of 64KB. We accumulate them up to 10MB and then we send to S3
      if (chunkAccumulator === null) {
        chunkAccumulator = chunk;
      } else {
        chunkAccumulator = Buffer.concat([chunkAccumulator, chunk]);
      }
      if (chunkAccumulator.length > chunkSize) {
        // pause the stream to upload this chunk to S3
        readStream.pause();
        const chunkMB = chunkAccumulator.length / 1024 / 1024;
        const params = {
          Bucket: bucket,
          Key: fileNameInS3,
          PartNumber: partNumber,
          UploadId: uploadId,
          Body: chunkAccumulator,
          ContentLength: chunkAccumulator.length,
        };
        s3.uploadPart(params).promise()
          .then((result) => {
            console.info(`Data uploaded. Entity tag: ${result.ETag} Part: ${params.PartNumber} Size: ${chunkMB}`);
            multipartMap.Parts.push({ ETag: result.ETag, PartNumber: params.PartNumber });
            partNumber++;
            chunkAccumulator = null;
            // resume to read the next chunk
            readStream.resume();
          }).catch((err) => {
            console.error(`error uploading the chunk to S3 ${err.message}`);
            reject(err);
          });
      }
    });

    readStream.on('end', () => {
      console.info('End of the stream');
    });

    readStream.on('close', () => {
      console.info('Close stream');
      if (chunkAccumulator) {
        const chunkMB = chunkAccumulator.length / 1024 / 1024;
        // upload the last chunk
        const params = {
          Bucket: bucket,
          Key: fileNameInS3,
          PartNumber: partNumber,
          UploadId: uploadId,
          Body: chunkAccumulator,
          ContentLength: chunkAccumulator.length,
        };
        s3.uploadPart(params).promise()
          .then((result) => {
            console.info(`Last Data uploaded. Entity tag: ${result.ETag} Part: ${params.PartNumber} Size: ${chunkMB}`);
            multipartMap.Parts.push({ ETag: result.ETag, PartNumber: params.PartNumber });
            chunkAccumulator = null;
            resolve(multipartMap);
          }).catch((err) => {
            console.error(`error uploading the last csv chunk to S3 ${err.message}`);
            reject(err);
          });
      }
    });
  });

  const multipartMap = await uploadPartsPromise;
  console.info(`All parts have been uploaded. Let's complete the multipart upload. Parts: ${multipartMap.Parts.length} `);

  // gather all parts' tags and complete the upload
  try {
    const params = {
      Bucket: bucket,
      Key: fileNameInS3,
      MultipartUpload: multipartMap,
      UploadId: uploadId,
    };
    const result = await s3.completeMultipartUpload(params).promise();
    console.info(`Upload multipart completed. Location: ${result.Location} Entity tag: ${result.ETag}`);
  } catch (e) {
    throw new Error(`Error completing S3 multipart. ${e.message}`);
  }
  return fileNameInS3;
}
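Assuming the function above lives in a module with the aws-sdk installed and configured, calling it could look like this (the file name and path are placeholders):

uploadToS3('big-export.csv', '/tmp/big-export.csv')
  .then((key) => console.info(`uploaded to S3 as ${key}`))
  .catch((err) => console.error(err));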
AWS AppFlow is a fully managed cloud integration solution for moving data between SaaS applications. It enables customers to model and configure flows to synchronize data between applications with just a few clicks. When setting up a flow, customers may encounter an error that says “conflict executing request connector profile is associated with one or more flows”.
This error occurs when the connector profile you are trying to delete is still associated with one or more flows.
Let's say you have a Salesforce connector (the same applies to any other available connector) and the token expired. The only way so far is to delete the connection and recreate it. It would be nice to keep the same connection and just run the OAuth handshake again, but that is not possible at the moment.
If we try to delete a connector from the AWS console and it is associated with one or many flows, it will display this error:
Conflict executing request: Connector profile: xxxxxxx is associated with one or more flows. If you still want to delete it, then make delete request with forceDelete flag as true. Some of the associated flows are: [xxxxx, xxxxxx]
The trick is to delete the connector from the AWS CLI.
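The command should look something like this (the connection name is a placeholder):

aws appflow delete-connector-profile --connector-profile-name my-salesforce-connection --force-delete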
If you're an AWS user, you've likely encountered the dreaded "AWS suspended error" in AppFlow at some point. This error can result from various issues, but if you are using a Salesforce connector, it can be due to the Salesforce daily limit being exceeded. In other words, you consumed your organization's daily allowance of events.
You’re not alone if you’ve experienced this error. Many AWS users encounter this message while trying to retrieve data or update resources. So, what can you do if you run into this error?
The AWS AppFlow suspended status error is shown for many reasons. You have to click on the Suspended word in the AWS console to see the extended error. Another option is to see it through the CLI.
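For example, with the AWS CLI you can describe the flow and look at its status details (the flow name is a placeholder; the exact fields returned may vary):

aws appflow describe-flow --flow-name my-salesforce-flow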
The error
This is an example error you could get:
The flow has been suspended due to an error in Salesforce when subscribing to the event. Here's the detailed error message: The request failed because the service Salesforce returned the following error: Details: Subscribing to topic /data/xxxxxxxx__ChangeEvent with replayId -1 failed due to com.amazon.sandstonebridge.connector.exception.ClientSubscriptionException: Cannot subscribe to topic /data/xxxxxxxxx__ChangeEvent, replay from -1, with error 403::Organization total events daily limit exceeded (Service: null; Status Code: 400; Error Code: Client; Request ID: xxxxxxxxxx; Proxy: null).
And this is the main reason:
Organization total events daily limit exceeded
How to fix the Salesforce daily limit exceeded error
Option 1
Just wait a couple of hours and reactivate the flow. The main reason is "Organization total events daily limit exceeded", which means we reached our Salesforce daily limit for events.
Option 2
Another option is to increase that limit (through Salesforce support), but whether that is feasible depends on several factors. In any case, you should find the root cause, especially if exceeding the limit doesn't make sense for your Salesforce implementation.
We can see this error when deploying a stack in AWS that contains an AppFlow block. If we are deploying through CloudFormation, it's possible we have added a new mapping to a field that doesn't exist in Salesforce.
Let's see an example of the error.
In this case, Field1__c and Field2__c are being mapped in AppFlow, but they do not exist in Salesforce, or at least AppFlow doesn't have permission to access them.
Resource handler returned message: “Invalid request provided: AWS::AppFlow::FlowCreate Flow
request failed:
[
Task Validation Error: The following connector fields are not
supported: [Field1__c, Field2__c]
The task sourceConnectorType is FILTERING and the task operator is PROJECTION,
Task Validation Error:
The following connector fields are not supported: [Field1__c]
The task sourceConnectorType is MAPPING and the task operator is NO_OP,
Task Validation Error: The following connector fields are not supported:
[Field2__c] The task sourceConnectorType is MAPPING and the task operator is NO_OP
]
(Service: Appflow, Status Code: 400, Request ID: xxxxxx-xxxx-xxxxx-xxxxx-xxxxxxx,
Extended Request ID: null)" (RequestToken: xxxxxx-xxxxxx-xxxxxx-xxxxxx-xxxxxxxx,
HandlerErrorCode: InvalidRequest)
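For context, those fields show up in the Tasks section of the AWS::AppFlow::Flow resource in the CloudFormation template. A rough, illustrative sketch of the offending part could look like this (the structure is simplified; the field names are the ones from the error above):

Tasks:
  - TaskType: Filter
    SourceFields:
      - Field1__c
      - Field2__c
    ConnectorOperator:
      Salesforce: PROJECTION
  - TaskType: Map
    SourceFields:
      - Field1__c
    DestinationField: Field1__c
    ConnectorOperator:
      Salesforce: NO_OP
  - TaskType: Map
    SourceFields:
      - Field2__c
    DestinationField: Field2__c
    ConnectorOperator:
      Salesforce: NO_OP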
Verification checklist
Make sure the field exists
Make sure you don’t have a typo in the field name
Make sure the user used for the OAuth connection has the necessary permissions. Check field-level security or any permission set in place
Try to recreate the connection and deactivate/activate the AppFlow flow
Using AppFlow, you can easily move data between Salesforce and other cloud-based systems such as Microsoft Dynamics, Oracle, or SAP. With AppFlow, you can automate the movement of data between these systems, which in turn means you can save time and energy.
One object at a time
At the time of writing this post, it's not possible to sync relationships from Salesforce through AppFlow. Only basic types such as strings or numbers can be passed through AppFlow, which means that look-up fields will be ignored. There's a way around it with some work, using the out-of-the-box Salesforce Process Builder or Flows tool, a custom object, and a few lines of Apex code.
The trick
We will need these three things to send out our related fields through AppFlow
A custom object
An Apex class
Process Builder
The custom object
This custom object will hold all the attributes to be synchronized. It will be our DTO (Data Transfer Object).
The Apex class
It will gather all the information we need and instantiate our DTO. Once the record is inserted, Change Data Capture will take over and send our data to AppFlow.
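A minimal sketch of such a class, assuming a hypothetical Account_Sync__c custom object as the DTO (the class, object, and field names here are placeholders):

public with sharing class AccountSyncPublisher {
    @InvocableMethod(label='Publish Account changes through the DTO')
    public static void publish(List<Id> accountIds) {
        List<Account_Sync__c> dtos = new List<Account_Sync__c>();
        for (Account acc : [SELECT Id, Name, Owner.Email
                            FROM Account WHERE Id IN :accountIds]) {
            dtos.add(new Account_Sync__c(
                Account_Id__c = acc.Id,
                Account_Name__c = acc.Name,
                Owner_Email__c = acc.Owner.Email));
        }
        // Inserting the DTO records fires Change Data Capture events for Account_Sync__c,
        // which is the object AppFlow listens to
        insert dtos;
    }
}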
The flow built using Process Builder or Flows
It will call our Apex class under the conditions we want. It can be as simple as "just execute every time I save an Account", or it can apply more complex rules.