Many companies use Salesforce to manage their business processes. One of the processes that Salesforce can help you with is tracking opportunities. You can use the platform to keep track of which leads or accounts have at least one closed-won opportunity. Knowing which accounts have won an opportunity can help your company target those companies for future sales or marketing efforts.
SELECT Id, Name, BillingCity, BillingState, BillingCountry, CreatedDate
FROM Account
WHERE Id IN (SELECT AccountId FROM Opportunity WHERE IsWon = true)
ORDER BY Name
LIMIT 10
The Queueable interface allows Apex developers to run complex, long-running processes asynchronously that couldn't normally run within a synchronous invocation. Queueables are simple to implement and provide several advantages over the old @future annotation, making them a great way to make sure your Apex code runs reliably and efficiently.
Async processing is an efficient way to run heavy algorithms. It usually relies on queues to control how many executions are being processed at a time. Different languages and platforms implement async processing in different ways, but the idea is the same. In Salesforce, one of the ways (yes, there is more than one) to execute logic that requires heavy processing is the Queueable interface. Another is the @future annotation, but here we will focus on implementing the Queueable interface in Apex classes.
One of the big challenges with most async processing approaches is the absence of any guaranteed execution order; it is essentially "do it when possible". One way to impose an order with Queueables is to chain jobs, as shown in the small sketch after the main example below.
In this example we are just inserting a new account. You may think that's not a heavy operation, but what if your org is full of Flows (or Process Builder processes) or triggers that run every time an account is inserted? CPU limit errors will appear soon unless you move that work off the main, synchronous execution.
By using Queueable, your code runs asynchronously under its own, more generous governor limits, especially the CPU time limit.
public class MyQueueable implements Queueable {

    private final String myAttribute;

    public MyQueueable(String myAttribute) {
        this.myAttribute = myAttribute;
    }

    public void execute(QueueableContext context) {
        System.debug('executing with: ' + myAttribute);
        // do some heavy work
        Account a = new Account(Name = myAttribute);
        insert a;
        // enqueue another job if you wish
    }

    public static void enqueueJob() {
        ID jobID = System.enqueueJob(new MyQueueable('my test param'));
        System.debug('job id: ' + jobID);
    }
}
How to enqueue our job
ID jobID = System.enqueueJob(new MyQueueable('my test param'));
Or you can create a method to use it as a shortcut
MyQueueable.enqueueJob();
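How to chain Queueable jobs
If you need to impose an order between jobs (the challenge mentioned at the beginning), you can chain them: enqueue the next job from inside execute(), so each step starts only after the previous one finishes. A minimal sketch, where the class name, the step field, and the limit of three steps are just for illustration:

public class MyChainedQueueable implements Queueable {

    private final Integer step;

    public MyChainedQueueable(Integer step) {
        this.step = step;
    }

    public void execute(QueueableContext context) {
        System.debug('running step: ' + step);
        // do the heavy work for this step here
        if (step < 3) {
            // chain the next job so the steps run one after the other
            System.enqueueJob(new MyChainedQueueable(step + 1));
        }
    }
}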
How to monitor Apex Jobs
You can monitor Apex Jobs from Setup -> Environments -> Jobs -> Apex Jobs. You can also query your jobs in case you need to do something with them from your code, or if you simply prefer to monitor them that way.
SELECT Status, NumberOfErrors, ExtendedStatus
FROM AsyncApexJob
ORDER BY CreatedDate DESC
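If you want to check a specific job from Apex, a minimal sketch could look like this, assuming a jobID variable like the one returned by System.enqueueJob earlier:

AsyncApexJob job = [SELECT Id, Status, NumberOfErrors, ExtendedStatus
                    FROM AsyncApexJob
                    WHERE Id = :jobID];
System.debug('job status: ' + job.Status);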
How to write unit tests for Apex Jobs
@isTest
public class MyQueueableTest {

    @isTest
    static void myTest() {
        String param = 'test param ' + Math.random();
        // startTest/stopTest block to force async processes to run in the test.
        Test.startTest();
        System.enqueueJob(new MyQueueable(param));
        Test.stopTest();
        // Validate that the job has run by verifying that the record was created.
        Account acct = [SELECT Name FROM Account WHERE Name = :param LIMIT 1];
        System.assertEquals(param, acct.Name);
    }
}
Surfshark VPN is a great choice for those looking for an easy and secure way to access the internet. Its intuitive user interface and wide array of features make it an excellent option for people of all tech levels. With Surfshark VPN, you can enjoy the benefits of strong 256-bit encryption and unlock restricted content from all around the globe. Surfshark VPN also offers a feature-rich app for iOS and Android, so you can stay safe and secure from anywhere.
Surfshark VPN is a virtual private network (VPN) service that provides online privacy and security by encrypting internet traffic and routing it through a secure server. The service allows users to access the internet securely and privately from anywhere in the world, hiding their location and identity.
With Surfshark VPN, users can bypass censorship and geo-restrictions, protect their online activities from being monitored or intercepted, and access restricted websites and content. The VPN service also provides features such as a kill switch, which disconnects the internet connection if the VPN connection drops, and multi-hop VPN, which routes the traffic through multiple servers for additional security.
Surfshark VPN is available for various platforms, including Windows, Mac, iOS, Android, and Linux, and can be used on unlimited devices with a single subscription. The service provides a user-friendly interface, 24/7 customer support, and a 30-day money-back guarantee.
In summary, Surfshark VPN is a secure and private VPN service that provides online protection and access to restricted content by encrypting internet traffic and routing it through a secure server.
These days, image lazy loading is an important technique to improve performance on web applications. Image lazy loading simply loads images asynchronously, so that they appear on the page only when the user scrolls close to them.
Native support for lazy loading images makes the user experience much better and also helps optimize performance. Let's see how to implement image lazy loading natively with just one attribute on the image HTML tag.
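A minimal sketch of what that looks like, using the native loading attribute on an img tag (the image URL and dimensions are just placeholders):

<img src="https://example.com/photo.jpg"
     alt="A photo that loads only when scrolled close to the viewport"
     width="600" height="400"
     loading="lazy">

Setting width and height as well helps the browser reserve the space before the image loads, avoiding layout shifts.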
In Salesforce, developers can invoke Apex methods from within the Flow Builder to execute code logic. Apex methods provide the capability of customizing Flows as needed, allowing users to create new features, validate data, and access records or web services. This post will walk you through how to invoke an Apex method from the Flow Builder.
What about Flow Builder
Flow Builder is the replacement for Process Builder in Salesforce. Both are useful for automating several kinds of processes, but sometimes the out-of-the-box functionality is not enough and we have to fall back to something custom, such as a few lines in an Apex class. Let's see, then, how to invoke an Apex method through the Flow Builder.
Salesforce setup – Search Flow Builder
Create Apex class with Invocable method
public class MyFlowClass {

    @InvocableMethod(label='My flow method'
                     description='A cool description about this method'
                     category='Account')
    public static void execute(List<Id> accountIds) {
        System.debug('account ids: ' + accountIds);
    }
}
Create the Flow
Screenshots: New Flow – select the type, then configure the Flow.
Remember to activate the Flow!
Test the Flow
Our flow will be executed every time we create or modify any account so let’s do that.
First of all, open the Developer Console and go to the Logs tab. Open an account, edit any field, and save it. After that, check the generated log and make sure the debug line is present.
Promises are a great way to improve your codebase by preventing callback hell and making asynchronous code look more readable. Working with files in Node.js can be done with callbacks if you’d like, but promises can make it easier. In this post, we’ll go over how to read and write files with promises in Node.js.
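As a quick taste of what that looks like, here is a minimal sketch using the promise-based fs API (the file name and its content are just placeholders):

const fs = require('fs').promises;

async function writeAndRead() {
  // write a file and read it back, with no callbacks involved
  await fs.writeFile('greeting.txt', 'hello from promises');
  const content = await fs.readFile('greeting.txt', 'utf8');
  console.log(content);
}

writeAndRead().catch(console.error);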
Node-cache is an easy-to-use module that provides an in-memory caching system for Node.js applications. It helps speed up operations by caching the results of frequently used queries, making subsequent lookups faster.
Using node-cache is easy. To get started, install it with npm.
npm install node-cache
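Before wiring it into a database helper, here is a minimal sketch of the basic API (the key name, value, and TTL are just for illustration):

const NodeCache = require('node-cache');

// stdTTL is the default time-to-live for every entry, in seconds
const myCache = new NodeCache({ stdTTL: 60 });

myCache.set('user:1', { name: 'Ada' }); // store a value
const user = myCache.get('user:1'); // returns the value, or undefined if missing or expired
console.log(user);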
Database access is usually expensive so you should optimize roundtrips as much as possible.
You can avoid queries by caching at multiple levels:
Using a CDN
Caching at browser level
Another option is to cache right before running queries. It can be as simple as: "OK, I'm about to run this query with these parameters, did I execute it recently?"
How to use node-cache module to cache queries
We are going to build a small piece of code to show how to use the node-cache module.
It will be used to cache a SQL query result but it can be used to cache any other result.
const { Pool } = require('pg');
const NodeCache = require('node-cache');
const crypto = require('crypto');
const log4js = require('log4js');
const queryCache = new NodeCache();
const logger = log4js.getLogger('db_helper');
logger.level = 'info';
// reject self-signed certificates, except in development
let rejectUnauthorized = true;
if (process.env.NODE_ENV === 'development') {
  rejectUnauthorized = false;
}
// more options: https://node-postgres.com/api/client
const timeout = process.env.DB_TIMEOUT || 1000 * 10;
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  statement_timeout: timeout,
  query_timeout: timeout,
  connectionTimeoutMillis: timeout,
  ssl: {
    rejectUnauthorized,
  },
});
/**
 *
 * @param {string} theQuery
 * @param {Array} bindings
 * @param {boolean} withCache true to cache the result
 * @return {Promise<*>}
 */
module.exports.query = async function (theQuery, bindings = [], withCache = false) {
  if (withCache) {
    logger.info(`executing query with cache ${theQuery}`);
    const stringToHash = `${theQuery}${JSON.stringify(bindings)}`;
    logger.info(`string to hash: ${stringToHash}`);
    const hash = crypto.createHash('sha256').update(stringToHash).digest('hex');
    logger.info(`hash: ${hash}`);
    const value = queryCache.get(hash);
    if (value === undefined) {
      try {
        logger.info('no cache for this query, let\'s go to the DB');
        const queryResult = await pool.query(theQuery, bindings);
        queryCache.set(hash, queryResult);
        logger.info(`cache set for ${hash}`);
        return queryResult;
      } catch (error) {
        throw new Error(`Error executing query with cache ${theQuery} error: ${error}`);
      }
    } else {
      logger.info(`returning query result from cache ${theQuery}`);
      logger.info(queryCache.getStats());
      return value;
    }
  } else {
    try {
      logger.info(`executing query without cache ${theQuery}`);
      const result = await pool.query(theQuery, bindings);
      // delete all the cache content if we are inserting, updating or deleting data
      const auxQuery = theQuery.trim().toLowerCase();
      if (auxQuery.startsWith('insert') || auxQuery.startsWith('update') || auxQuery.startsWith('delete')) {
        queryCache.flushAll();
        queryCache.flushStats();
        logger.info(`the cache was flushed because of the query ${theQuery}`);
      }
      return result;
    } catch (error) {
      throw new Error(`Error executing query without cache ${theQuery} error: ${error}`);
    }
  }
};
module.exports.execute = pool;
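Using the helper could look like the sketch below, assuming the file above is saved as db_helper.js; the query and table name are just examples:

const db = require('./db_helper');

async function getActiveUsers() {
  // the first call goes to the database and caches the result under the query hash
  const first = await db.query('SELECT * FROM users WHERE active = $1', [true], true);
  // an identical call with the same bindings is served from the cache
  const second = await db.query('SELECT * FROM users WHERE active = $1', [true], true);
  console.log(first.rows.length, second.rows.length);
}

getActiveUsers().catch(console.error);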
Salesforce and Amazon’s Web Services (AWS) are two powerful software development and cloud computing tools. In this post, we’ll discuss how these two tools can be integrated for an optimized and efficient workflow.
The integration of Salesforce and AWS allows businesses to take advantage of the scalability, reliability, and security of both platforms. The integration enables businesses to quickly and efficiently move key data and applications between the cloud platforms and reduces the complexity of integration.
There are many ways to sync our Salesforce data with third parties in real time. One option is a mix of Salesforce and AWS services, specifically Change Data Capture from Salesforce and AppFlow from AWS. We are going to build a CloudFormation YAML file with everything we need to deploy our integration to any AWS environment. However, it can be a good idea to do it first by point and click through the AWS console and then translate it into a CloudFormation template.
If you are using Heroku and Postgres, Heroku Connect is a good option too
About Salesforce Change Data Capture
Receive near-real-time changes of Salesforce records, and synchronize corresponding records in an external data store.
Change Data Capture publishes change events, which represent changes to Salesforce records. Changes include creation of a new record, updates to an existing record, deletion of a record, and undeletion of a record.
Important:
Change Data Capture does not support relationships at the time this post was written (08/2021). This means you will not be able to sync anything beyond your object unless you implement some tricks using Process Builder and Apex. That's out of the scope of this post; we will cover it in a separate one because it requires some extra steps and knowledge.
To start listening on a specific object, go to Setup -> Integrations -> Change Data Capture and move the object you want to the right.
Advantages of using the AppFlow approach
Data is transferred securely
Credentials are managed by the OAuth process
No coding required unless you want to run some specific logic for every sync
100% serverless, pay as you go
Disadvantages of using the AppFlow approach
The connection must exist before deploying the infrastructure; this is a manual step
This approach can take some time to learn and configure, especially if you are already familiar with callouts from Salesforce
Requirements for Salesforce
Your Salesforce account must be enabled for API access. API access is enabled by default for the Enterprise, Unlimited, Developer, and Performance editions.
Your Salesforce account must allow you to install connected apps. If this functionality is disabled, contact your Salesforce administrator. After you create a Salesforce connection in Amazon AppFlow, verify that the connected app named Amazon AppFlow Embedded Login App is installed in your Salesforce account.
The refresh token policy for the Amazon AppFlow Embedded Login App must be set to Refresh token is valid until revoked. Otherwise, your flows will fail when your refresh token expires.
You must enable change data capture in Salesforce to use event-driven flow triggers.
If your Salesforce app enforces IP address restrictions, you must grant access to the addresses used by Amazon AppFlow.
To create private connections using AWS PrivateLink, you must enable both the Manage Metadata and Manage External Connections user permissions in your Salesforce account. Private connections are currently available in the us-east-1 and us-west-2 AWS Regions.
Architecture for the solution
Let's say we want to listen to changes on the Account object. Every time an Account is created or updated, an event will be sent to AppFlow through Salesforce Change Data Capture.
We could add some logic in the Lambda function to decide whether we are interested in that change or not, as sketched below.
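For instance, a minimal sketch of that Lambda logic could look like the following. The event shape depends on how the flow delivers the change events, so the fields used here (a ChangeEventHeader with entityName and changeType) are an assumption to illustrate the filtering idea; log the real payload first and adjust the parsing:

exports.handler = async (event) => {
  // assume the payload carries one or more change records
  const records = Array.isArray(event.Records) ? event.Records : [event];
  for (const record of records) {
    const header = record.ChangeEventHeader || {};
    // only react to updates on Account records, ignore everything else
    if (header.entityName === 'Account' && header.changeType === 'UPDATE') {
      console.log('processing Account change', JSON.stringify(record));
      // ...push the change to the external data store here
    }
  }
};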
How to create the Salesforce Oauth Connection
As we said, an OAuth connection must exist before deploying our stack to AWS. This is something we have to create by hand. If we deal with different environments in AWS, we can create as many connections as we want, pointing to our different Salesforce instances.
Open your AWS console and go to Amazon AppFlow
Go to View Flows and click on Connections
Click on Create Connection. Select Production (also for Developer Edition orgs). Provide a connection name
Once you click on Continue, a Salesforce popup will open. Enter your Salesforce credentials to log in
After that, your connection will be created and available to use
It's important to have a way to troubleshoot in case things go wrong. Since this integration involves different AWS services, we have to see what tools each one offers.
The AWS SDK is an incredibly powerful tool for managing resources in the cloud. But what if you want to use the SDK without having to deploy your code to AWS?
Mocking the SDK with Jest and TypeScript is a great way to quickly and easily test code that interacts with AWS services. By mocking the SDK, you can simulate requests and responses to ensure that everything works as expected.
This tutorial will walk you through how to create a mock of the AWS SDK and how to use it with Jest and TypeScript
When writing automated tests for applications that use Amazon Web Services (AWS) APIs, it can be difficult to test code that interacts with the AWS SDK. In order to make testing easier, you can use Jest in combination with TypeScript to mock the AWS SDK and simulate responses from the AWS service. This makes it possible to create reliable tests for your application without actually calling AWS APIs.
Advantages of mocking the AWS SDK
Avoiding calls to real AWS services will save you a lot of money, especially if you have thousands of tests. Even if you use Localstack, configuring your project, from checking out the code until you run it, will take more time.
Your tests will run faster
Disadvantages of mocking the AWS SDK
It slows down the development process
You need mocks for almost every scenario; otherwise, the tests will call the real code (which is what we want to avoid)
Let’s see an example of how to mock the AWS SDK with Jest and TypeScript
The following code mocks two methods of SQS: receiveMessage and deleteMessage. If your code uses more methods of the AWS SDK you will have to mock all of them. Otherwise, your tests will call the real code.
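A sketch of what that can look like with the v2 aws-sdk package is shown below; the queue URL, the message body, and the assertions are just for illustration:

import { SQS } from 'aws-sdk';

// Mock the whole aws-sdk module so no real AWS call is ever made.
jest.mock('aws-sdk', () => {
  const receiveMessage = jest.fn().mockReturnValue({
    promise: jest.fn().mockResolvedValue({
      Messages: [{ Body: 'hello', ReceiptHandle: 'handle-1' }],
    }),
  });
  const deleteMessage = jest.fn().mockReturnValue({
    promise: jest.fn().mockResolvedValue({}),
  });
  return { SQS: jest.fn(() => ({ receiveMessage, deleteMessage })) };
});

describe('queue consumer', () => {
  it('reads and deletes a message without calling AWS', async () => {
    const sqs = new SQS();
    const result = await sqs
      .receiveMessage({ QueueUrl: 'https://example.com/my-queue' })
      .promise();
    expect(result.Messages![0].Body).toBe('hello');

    await sqs
      .deleteMessage({ QueueUrl: 'https://example.com/my-queue', ReceiptHandle: 'handle-1' })
      .promise();
    expect(sqs.deleteMessage).toHaveBeenCalledTimes(1);
  });
});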
Redis, one of the most popular open-source in-memory databases, provides a simple and straightforward way to delete all keys with a given prefix. This can be done with the help of the KEYS and DEL commands. First, use KEYS to find all keys with a given prefix; this command searches all keys within the given database and returns the matching keys in an array. Next, use DEL to delete those keys. For example, if you wanted to delete all keys starting with dev, you would list them with KEYS dev* and pass the result to DEL.
In this post we will delete all the keys with a given prefix in a Redis cluster.
Let’s ping our Redis instance
redis-cli -h myhost.com -p 6379 ping
Set some keys
redis-cli -h myhost.com -p 6379 SET dev1 "val1"
redis-cli -h myhost.com -p 6379 SET dev2 "val2"
redis-cli -h myhost.com -p 6379 SET dev3 "val3"
Get one key
redis-cli -h myhost.com -p 6379 KEYS dev1
Delete one key
redis-cli -h myhost.com -p 6379 DEL dev1
Now let's move on to our bulk deletion, but before deleting anything, let's dry-run the loop without making changes.
for key in `echo 'KEYS dev*' | redis-cli -c -h myhost.com -p 6379 | awk '{print $1}'`
do echo KEYS $key
done | redis-cli -c -h myhost.com -p 6379
And then when you are sure, go ahead with the deletion
for key in `echo 'KEYS dev*' | redis-cli -c -h myhost.com -p 6379 | awk '{print $1}'`
do echo DEL $key
done | redis-cli -c -h myhost.com -p 6379
In case you are not using a cluster, just remove the -c option from the redis-cli commands.