
General Published on: Thu Feb 23 2023

Understanding ChatGPT and Its Implications


OpenAI launched ChatGPT, a chatbot, in November 2022. It is built on the OpenAI GPT-3.5 family of large language models and was fine-tuned using both supervised and reinforcement learning techniques.

ChatGPT was fine-tuned on top of GPT-3.5 using supervised and reinforcement learning. Human trainers were involved in both approaches to improve the model's performance. In the supervised step, the model was fed conversations in which the trainers played the roles of both the user and the AI assistant. In the reinforcement step, human trainers ranked responses the model had generated in a previous conversation.
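The ranking step described above is typically used to train a reward model with a pairwise preference loss. As a minimal sketch (the function name and scalar rewards here are illustrative, not OpenAI's actual implementation), the loss shrinks when the human-preferred response receives a higher reward score:

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise ranking loss used when training a reward model:
    low when the reward for the human-preferred response exceeds
    the reward for the rejected one, high otherwise."""
    # -log(sigmoid(r_chosen - r_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Preferred answer scored higher -> low loss
print(round(pairwise_reward_loss(2.0, 0.0), 4))  # 0.1269
# Preferred answer scored lower -> high loss
print(round(pairwise_reward_loss(0.0, 2.0), 4))  # 2.1269
```

The reinforcement step then optimizes the language model to maximize the scores this reward model assigns.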

Calling ChatGPT a chatbot understates its possibilities: it behaves like a very fast and obedient junior developer or admin. With minimal input, ChatGPT can produce spectacular results, such as Validation Rules, Apex code, or even a blog post. This post could have been authored by ChatGPT and you would have no idea. However, there are certain limits, which are discussed further below.


As we all know, AI is built entirely on models, and a model is a program that can detect patterns or make decisions from previously unseen data. The OpenAI API is powered by a family of models with varying capabilities and price points. You can also fine-tune the base models to fit your individual use case.

In general, OpenAI provides three base model families, each of which contains subcategories of the same type.

How ChatGPT works

You type a request, a prompt, similar to a Google search query, but phrased as if you were speaking to someone. ChatGPT generates an answer and returns it in a remarkably natural, conversational manner. It does not generate genuinely new ideas or content; it draws on a massive collection of information gathered from millions of websites.
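Programmatically, the prompt-and-response exchange above maps onto the message format the OpenAI chat completions endpoint expects. The sketch below only assembles the request payload; actually sending it requires an API key and an HTTP client, which are omitted here (the helper function name is an assumption for illustration):

```python
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload for the OpenAI chat completions
    endpoint: a model name plus a list of role-tagged messages."""
    return {
        "model": model,
        "messages": [
            # A system message sets the assistant's behavior.
            {"role": "system", "content": "You are a helpful assistant."},
            # The user message carries the actual prompt.
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request(
    "Write a Salesforce validation rule that requires Phone on Account."
)
print(payload["messages"][1]["role"])  # user
```

The more precise the user message, the better the generated answer, which is exactly the point made above about concise prompts.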

The outcome is significantly better if you provide a concise and precise prompt. You should also keep in mind that the result may not be completely correct. The consequence is that you must understand the problem you are trying to solve and be able to validate the solution. Think of ChatGPT as a very capable and efficient assistant or consultant.

If the response is inaccurate, you can change the prompt and it will rectify the answer. It is teaching you how to ask questions correctly, and you are teaching it how to offer better answers.

What are the implications of ChatGPT?

1. For cybersecurity

ChatGPT, according to Check Point Research and others, is capable of producing phishing emails and viruses, especially when paired with OpenAI Codex. The CEO of ChatGPT developer OpenAI, Sam Altman, warned that evolving software might offer "(for example) a tremendous cybersecurity risk" and also continued to anticipate "we could get to actual AGI (artificial general intelligence) in the next decade, so we have to take the danger of that extremely seriously". Altman claimed that, while ChatGPT is "clearly not close to AGI," one should "believe the exponential. Flat looking backward, vertical looking forwards."

2. The end of Salesforce developers

ChatGPT's output is textual, not graphical, which makes it well suited to writing code. For example, it can create a Formula, Validation Rule, Apex Class, Lightning Web Component (LWC), or a Unit Test for an LWC. It cannot generate a Flow or other declarative outcome, but it can generate the XML produced by a declarative action, such as the creation of an object and its fields.
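As an illustration of the kind of narrow, well-defined output it handles well, a prompt such as "write a Validation Rule that blocks marking an Opportunity Closed Won without an Amount" typically yields a formula along these lines (a sketch using standard Opportunity fields; your exact requirements may differ):

```
AND(
    ISPICKVAL(StageName, "Closed Won"),
    ISBLANK(Amount)
)
```

A rule with this error condition formula fires, and blocks the save, whenever the stage is set to Closed Won while Amount is empty.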

ChatGPT is perfect for a well-defined, narrow scope. When applied to a larger problem, the results are less convincing. No matter how much detail you provide, it will not write the XML required to configure Salesforce for a yoga studio.

3. Junior developers are at the greatest risk

Junior developers can use ChatGPT to improve their coding skills by posing a question and then reviewing the response. Another option is for them to train as a Prompt Engineer.


ChatGPT and other LLMs appear to produce polished, accurate results, but they are far from flawless, as OpenAI CEO Sam Altman remarked in a tweet. Because their output is built from millions of unvalidated websites, ChatGPT and LLMs can reinforce social biases, disparage women and people of colour, and fabricate historical and biographical data to justify false and dangerous claims. The same mechanism explains why ChatGPT is so good at writing code: its training data includes millions of lines of working code.

These issues are less of a concern in the Salesforce ecosystem, as the obvious application of ChatGPT is generating well-specified content, such as a Formula, rather than soliciting opinions.

However, as noted above, ChatGPT is not a substitute for experience. You can only write the prompt if you know what you want, and you can only use the result if you are able to verify that it is correct.

As AI and ML evolve at a rapid rate, organizations that embed these technologies in their operations will gain a first-mover advantage and a competitive edge. To implement next-generation solutions, companies need a Data Science Services Company like Hexaview. Through our AI & Machine Learning Consulting services, Hexaview applies deep technological expertise to help you automate internal processes, deliver personalized customer experiences, and implement next-generation solutions that transform your business operations at scale.

Tejasv Pratap Tyagi

Application Engineer (Salesforce)

Tejasv is an Application Engineer with one year of experience in the Salesforce ecosystem. He is also a state-level basketball player. In his free time, he continually researches new developments in the Salesforce world.