Validation rules are an important tool for ensuring that data is accurate, complete, and consistent. They check the data in a record before it is saved. Rules can require specific data types, such as dates or numbers, enforce minimum or maximum values, or require certain characters or text strings. Validation rules can also enforce complex business rules, such as ensuring that a customer’s address matches the one on their credit card. With validation rules, you can ensure that your records always contain the correct information before they are saved.
Let’s see some useful tips to write good error messages.
Give instructions that tell the user what a valid entry looks like, such as “the birthday must be before today”.
Include the field label to identify the field that failed validation, especially if the error message appears in a common place such as a toast or a popup.
Try to describe the source of the error.
Avoid adding “try again” at the end of every error message. Use it only when the error is transient, such as a network problem, where a second attempt can actually resolve it. Trying again won’t make the problem disappear automagically; give the user some hints on how to resolve it, even if that’s just “Please contact the support team”.
Make sure the error message does not expose sensitive data. Sometimes this is a good reason to display just “internal error, try again”.
As a web developer, I have always been on the lookout for ways to simplify the process of deploying and managing my applications. That’s why I recently decided to adopt Heroku as my platform of choice.
Heroku is an incredibly user-friendly platform that allows developers to quickly and easily deploy their applications in minutes. With a few clicks, my applications can be deployed to production. More importantly, with Heroku, I can focus on development rather than having to spend time dealing with tedious server configuration.
This post is written from a developer’s perspective, where the main task we do daily is (obviously) development. So if you are a sysadmin, you may like Heroku at the beginning, but soon you will have your hands tied, since flexibility is not its main feature.
It’s not a complete guide and probably won’t be enough to make a decision, but it should be useful as a starting point for investigating this great platform a bit more.
What is Heroku?
Heroku is a platform as a service (PaaS) acquired by Salesforce in 2011.
Let’s say it’s a smart hosting service for your web apps (*)
(*) Even though you can deploy a daemon (a.k.a. a Heroku worker), I would bet that most of the software deployed on Heroku is web applications.
You developed your web application and then what?
You created a beautiful web application and now you want to publish it under your own domain http://www.myawesomeapp.com. You have a couple of options.
One option is to rent a Virtual Private Server (VPS).
Another option is to rent a dedicated server. Both of them require some sysadmin knowledge, because most of the time you get (literally) an OS with just enough software to boot.
That certainly has some advantages, such as total control over security and performance. The main disadvantage I see is that you lose focus on development in order to pay attention to backups, security, deployments, etc.
Concentrate on development, not on deployment
Even if you only build a personal blog to document your experiences, you want 99% uptime, so you will have these main tasks:
Development
Configure the production environment
Deployment
Writing
Backups
Scaling. OK, only if you write lots of interesting articles 🙂
That seems like a lot of tasks just for a personal blog, right?
You can delegate most of those tasks to Heroku and concentrate more on development. Among its advantages:
Great support
Nice uptime
A large list of Add-ons
A free plan to start learning and to have your testing environment (unless you do stress testing).
Disadvantages
Flexibility. It is not like a VPS, where you can customize lots of things, such as the web server configuration and even the OS.
But wait… isn’t it too expensive?
The answer depends on the value you assign to your time and headaches.
Since most of the time we don’t have a sysadmin on our team, we would have to do that work ourselves, taking time away from our main task: developing our cool app.
My experience with Java and Heroku
I have been involved in the Java world for 10+ years, and even more if I count my university years. However, I only started using Heroku a couple of years ago. In the past, I used to configure a server from scratch: install Tomcat, GlassFish, MySQL, iptables, a mail server (very painful), Apache, PHP, the JRE, etc. Even though it’s hard, it’s also fun to learn.
Currently, I’m involved in some projects using Java plus Heroku, and it feels very comfortable to deploy with just one command or click, without configuring so much stuff.
Security
If you deal with sensitive data such as Salesforce org data, Heroku offers private spaces that have (among other things) special configuration for those cases.
Hashing is an essential part of encryption and data security. A hash function is an algorithm that takes the text of a file, message, or other data and converts it into a fixed-length string of characters. The resulting hash acts as a fingerprint of the text and can be used to verify the integrity of the original data.
In this post, we’ll look at how to hash text in Node.js with the Crypto library.
const crypto = require('crypto');
const stringToHash = 'hello';
const hash = crypto.createHash('sha256').update(stringToHash).digest('hex');
// the output is a 64-character hexadecimal string; the same input always produces the same digest
// instead of .digest('hex') you can also use .digest('base64')
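As a small example of the integrity check mentioned above, you can hash a file’s contents and compare the digest with one you computed earlier. This is just a sketch: the file path and the stored digest are placeholders, and it reuses the crypto module required above.

const fs = require('fs');

const fileBuffer = fs.readFileSync('./example.txt'); // placeholder path
const fileHash = crypto.createHash('sha256').update(fileBuffer).digest('hex');

const knownHash = 'previously-computed-digest'; // placeholder value
console.log(fileHash === knownHash ? 'file is intact' : 'file has changed');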
GitHub Actions provides a great way to automate workflow processes and make development and deployment more efficient. Linting is an important part of any software development project to ensure code consistency and maintain uniform standards. In this post, I will explain how to set up a GitHub Actions lint workflow to automate your linting process.
Setting up a GitHub Actions lint workflow is easy and can help keep your codebase consistent and maintainable.
First, you need to create a workflow file in your repository at .github/workflows/lint.yml; it should contain the configuration necessary to run your linting script, as in the sketch below.
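Here is a minimal sketch of such a workflow. It assumes a Node.js project with a lint script defined in package.json (for example, one that runs ESLint); adjust the trigger branches and Node version to your setup.

name: Lint

on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci        # install dependencies
      - run: npm run lint  # run the project's lint script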
We all know that JavaScript Object Notation (JSON) is a powerful and popular data interchange format. It is widely used in web development and API design. But what if you wanted to transform a JSON object’s keys and values to lowercase?
Fortunately, there is a simple way to do this: loop through the object, retrieve each key and value, and convert them to lowercase using the JavaScript toLowerCase() method.
But what if we do it in a fancier way using reduce?
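Here is one possible implementation of objToLowerCase using reduce. It is a minimal sketch that lowercases both keys and values, matching the goal of the post, which is why the test below checks the lowercased keys.

const objToLowerCase = (obj) =>
  Object.keys(obj).reduce((acc, key) => {
    // lowercase both the key and its value
    acc[key.toLowerCase()] = String(obj[key]).toLowerCase();
    return acc;
  }, {});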
test("obj to lowercase", () => {
const obj = {
MyKey1: "MY Value 1",
MyKey2: "MY Value 2",
};
const result = objToLowerCase(obj);
expect(result["MyKey1"]).toBeDefined();
expect(result["MyKey1"]).toBe("my value 1");
expect(result["MyKey2"]).toBeDefined();
expect(result["MyKey2"]).toBe("my value 2");
});
AWS Lambda functions are a great way to automate certain tasks and processes in the cloud. They can be triggered by events, such as a file upload to an S3 bucket or a message sent to an SNS topic, allowing you to execute some code in response.
In this post, we’ll show you how to write data to an S3 bucket from a Lambda function. This can be useful for a variety of tasks, such as archiving log files or uploading data to a data lake.
S3 (Simple Storage Service) is Amazon’s cloud storage solution that allows you to store and access data from anywhere in the world. Writing to an S3 bucket from a Lambda function is a simple way to store and access data in the cloud.
In this post, I’ll walk you through how to write to an S3 bucket from a Lambda function. We’ll use the AWS SDK for Node.js to access S3.
Take the example below as a starting point. It is not production-ready code; some tweaks, probably around permissions, will be necessary to meet your requirements.
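Here is a minimal sketch of such a handler, using the AWS SDK for JavaScript v2 (bundled in older Node.js Lambda runtimes; on newer runtimes install it or use the v3 client). The bucket name and object key are placeholders, and the function’s execution role needs s3:PutObject permission on the bucket.

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const BUCKET_NAME = 'my-example-bucket'; // placeholder, replace with your bucket

exports.handler = async (event) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: `logs/${Date.now()}.json`,   // object key, e.g. a timestamped log file
    Body: JSON.stringify(event),      // write the incoming event as the file content
    ContentType: 'application/json',
  };

  // upload the object to S3
  await s3.putObject(params).promise();

  return { statusCode: 200, body: `Stored ${params.Key} in ${BUCKET_NAME}` };
};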
Prettier is a great tool for making your Node.js code look clean and consistent. It is an easy way to eliminate some of the frustrations associated with coding, such as redundant and messy code formatting. With a few simple steps, you can set up Prettier and get your code looking beautiful.
Overview
Prettier is a code formatter that supports many languages and can be integrated with most editors.
Also, you can integrate it with your CI/CD.
That way nobody will be able to merge to the master branch if the code is not well-formatted.
With Prettier you can define your own rules; however, the default rules are enough at the beginning.
Your rules will be defined in a file called .prettierrc placed in your project’s root.
Let’s see how to install Prettier and then make some configurations.
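As a sketch, you can install Prettier as a dev dependency:

npm install --save-dev prettier

And then create a .prettierrc in the project root. The options below are just an example (the defaults are fine if you prefer them):

{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 100
}

From there you can format the whole project with npx prettier --write . or verify formatting in CI with npx prettier --check .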
Jacoco is a tool used to generate detailed reports on code coverage. It is an essential part of a software developer’s toolkit, since it helps determine what portions of a project’s code were actually tested. Jacoco produces HTML reports that display code coverage information in an easy-to-read format.
Context
After running the Jacoco report generator, we can open the report and see the coverage results. There were some numbers I wanted to know: for example, how many packages contain uncovered classes, and how many have coverage under a given threshold (in this case I used 60%). So I built a script to count them and print the results in the browser’s console. Step into Jacoco’s report tab and paste this script into your browser console to execute it and see the output.
Script
const table = document.getElementById("coveragetable");
const row = table.rows;
let uncoveredPackages = 0;
let columnFound = false; // set once we find the instruction-coverage column
let thresholdCoverage = 60;
let packagesUnderThreshold = 0;
let totalPackages = 0;
let i = 0;
while (!columnFound && i < row[0].cells.length) {
  // Getting the text of the column header
  const str = row[0].cells[i].innerHTML;
  if (str.search("Cov.") != -1) {
    if (!columnFound) {
      // this is the first "Cov." column (instruction coverage); the next "Cov." column is the one for branches
      columnFound = true;
      // iterate every row but the head and foot
      for (let j = 1; j < row.length - 1; j++) {
        const content = row[j].cells[i].innerHTML;
        const cov = parseFloat(content);
        totalPackages++;
        if (cov === 0) {
          uncoveredPackages++;
        }
        if (cov < thresholdCoverage) {
          packagesUnderThreshold++;
        }
      }
    }
  }
  i++;
}
console.log(`total packages: ${totalPackages}`);
console.log(`uncovered packages: ${uncoveredPackages}`);
console.log(`packages under ${thresholdCoverage}%: ${packagesUnderThreshold}`);
PG-Promise is a popular Node.js library for interacting with PostgreSQL databases. Unfortunately, it does not always work with self-signed SSL certificates generated by Postgres. If you’re seeing errors related to self-signed certificates when using PG-Promise, here is how to fix the issue.
If you are using Node.js with one of these packages, pg-promise or pg, you are probably facing this issue.
Error: self signed certificate
To fix this issue, configure the connection as shown in the recommendations below.
The issue occurs when we try to use the module this way:
// The problematic usage: connecting with just a connection string
const pgp = require('pg-promise')();
const db = pgp('postgres://john:pass123@localhost:5432/products');

// The fix: pass a config object with an explicit ssl setting.
// rejectUnauthorized: false disables certificate verification, so keep it
// for local development with a self-signed certificate only, never in production.
let ssl = null;
if (process.env.NODE_ENV === 'development') {
  ssl = { rejectUnauthorized: false };
}

const config = {
  host: 'localhost',
  port: 5432,
  database: 'my-database-name',
  user: 'user-name',
  password: 'user-password',
  max: 30, // use up to 30 connections
  ssl: ssl
};

// Or you can use it this way (pick one config style, not both)
const config = {
  connectionString: 'postgres://john:pass123@localhost:5432/products',
  max: 30,
  ssl: ssl
};

const db = pgp(config);
Downloading a file from Salesforce using Java is not as difficult as it may seem. With the right tools and knowledge, you can quickly and easily download any type of file from Salesforce via the REST API. To get started, you’ll need a few software tools and a few lines of code.
Let’s say we want to download a file stored in Salesforce. Keep in mind that we said a File and not an Attachment (the old way to deal with files in Salesforce), so we’ll have to deal with ContentDocument, ContentVersion, ContentDocumentLink, etc.
Agenda:
Salesforce security token
Partner API
Salesforce Rest API
Store a file
Implementation
We are going to use the force-partner-api dependency to get a session id after logging in.
If we use Maven, our pom.xml will have this dependency:
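A sketch of the dependency entry (the version shown is only an example; use the release that matches your API version):

<dependency>
    <groupId>com.force.api</groupId>
    <artifactId>force-partner-api</artifactId>
    <version>51.0.0</version>
</dependency>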
Let’s have some constants to keep our params in one place.
private static final Logger LOG = Logger.getLogger(Main.class.getName());
private static final String USERNAME = "********";
private static final String ACCESS_TOKEN = "********";
private static final String PASSWORD = "********"+ACCESS_TOKEN;
private static final String INSTANCE_URL = "https://********-dev-ed.my.salesforce.com";
private static final String API_VERSION = "51.0";
private static final String SALESFORCE_FILE_ID = "0684W00000AzWoiQAF"; // ContentVersion id
private static final String LOCAL_FILE_FULL_NAME = "/tmp/sf-java.png";
Important: the ACCESS_TOKEN constant represents the Salesforce security token you get from Settings -> Reset My Security Token
When you access Salesforce from an IP address that isn’t trusted for your company, and you use a desktop client or the API, you need a security token to log in. What’s a security token? It’s a case-sensitive alphanumeric code that’s tied to your password. Whenever your password is reset, your security token is also reset.
After you reset your token, you can’t use your old token in API applications and desktop clients.
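To obtain the session id mentioned earlier, a minimal login helper using the Partner API classes could look like the sketch below. It relies on the constants defined above; error handling is omitted.

import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.ws.ConnectorConfig;

private static String login() throws Exception {
    ConnectorConfig config = new ConnectorConfig();
    config.setUsername(USERNAME);
    config.setPassword(PASSWORD); // password + security token
    PartnerConnection connection = Connector.newConnection(config);
    // after a successful login, the config holds the session id we need
    return config.getSessionId();
}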
The method to get the file (download it and save it to our local disk):
private static void getFile(String sessionId) throws Exception {
    String urlString = INSTANCE_URL + "/services/data/v" + API_VERSION + "/sobjects/ContentVersion/" + SALESFORCE_FILE_ID + "/VersionData";
    LOG.info(urlString);
    URL url = new URL(urlString);
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestMethod("GET");
    con.setRequestProperty("Content-Type", "application/octet-stream");
    con.setRequestProperty("Authorization", "OAuth " + sessionId);
    LOG.info("Status " + con.getResponseCode());
    LOG.info("Status message " + con.getResponseMessage());
    // store the file on our disk
    Files.copy(con.getInputStream(), Paths.get(LOCAL_FILE_FULL_NAME), StandardCopyOption.REPLACE_EXISTING);
}