There are so many posts and blogs about ChatGPT these days! Yet it is still not known to everyone. I recently met some people at an event who were completely unaware of what it is. From what they said, some thought it might be similar to IBM's Watson, while others thought it was similar to Alexa.
As ChatGPT is open for anyone to try, people are interacting with it, getting amazed, and posting their conversations on social media. There are supposedly API-based applications that leverage the power of ChatGPT for a variety of use cases. I was quite curious to know what it can and cannot do, so I decided to ask ChatGPT about its capabilities and how we could perhaps use it best. This was the response:
ChatGPT is a large language model that can be used for a wide variety of tasks, including:
Text Generation: ChatGPT can be used to generate human-like text for tasks such as writing essays, articles, or even poetry.
Text Completion: ChatGPT can complete a given sentence or paragraph by suggesting the next word or phrase.
Dialogue Generation: ChatGPT can be used to generate responses in a conversation, for example for chatbot or virtual assistant applications.
Language Translation: ChatGPT can be used to translate text from one language to another, although it's important to note that the quality of the translation may vary.
Text Summarization: ChatGPT can be used to generate a summary of a given text, which could be useful for tasks such as news summarization.
Question Answering: ChatGPT can be used to answer questions based on a given context; it could be used in a conversational agent or a search engine.
In order to use ChatGPT effectively, it's important to understand its capabilities and limitations. The model's responses will be based on the information it has been trained on, and its accuracy may vary depending on the task and the input provided.
It's also important to note that the model is a machine and it's not able to understand context and meaning in the same way that humans do, so it's important to be specific and clear when asking questions or providing input.
Hmmm... now that is helpful. So it is probably pointless to try it on mathematical, analytical, and logical questions.
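For anyone who wants to try the same kind of question programmatically rather than in the browser (I used the web UI), the sketch below shows one way it might look with the openai Python package; the model name and the prompt are placeholder assumptions on my part, not something ChatGPT told me.

# Minimal sketch: asking a ChatGPT-style model about its capabilities via the
# OpenAI Python SDK (v1+). Assumes OPENAI_API_KEY is set in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "What are your capabilities and how could we use you best?",
        }
    ],
)

print(response.choices[0].message.content)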
Text Generation Capabilities
I found it pretty good at Text Generation. For instance, I asked it to explain the technical architecture for implementing a blog on AWS. This was the response:
The following is an example of a possible architecture:
Content Management System: A Content Management System (CMS) such as WordPress or Drupal can be installed on an Amazon Elastic Compute Cloud (EC2) instance to manage the blog's content.
Database: A database such as Amazon Relational Database Service (RDS) can be used to store the blog's content, such as posts, comments, and user information.
Web Server: An Amazon Elastic Load Balancer (ELB) can be used to distribute incoming traffic to the EC2 instances running the blog's web server software such as Apache or Nginx.
Content Delivery: Amazon CloudFront can be used to distribute the blog's content to users globally, improving the website's performance by caching the content at edge locations.
Storage: Amazon Simple Storage Service (S3) can be used to store the blog's media files, such as images and videos.
Security: Amazon Virtual Private Cloud (VPC) and AWS Identity and Access Management (IAM) can be used to secure the blog's infrastructure and data.
Backup and Recovery: Amazon S3 can be used to store backups of the blog's content, and Amazon RDS snapshots can be used to back up the database.
Monitoring and Logging: Amazon CloudWatch can be used to monitor the blog's infrastructure and application, and Amazon S3 can be used to store logs for analysis and troubleshooting.
This is just an example of one possible architecture, and the specific technologies and services used may vary depending on the specific requirements of the blog.
It has sourced this information from the data that was used to train the LLM, and it acknowledges that this is only one possible solution (not necessarily the best or a perfect one). This is quite close to having a conversation with a human being who knows the topic and offers an initial response to such a question.
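To make that suggestion slightly more concrete, here is a rough sketch of my own (not ChatGPT's output) showing how two of the pieces it mentioned, the S3 media bucket and an RDS backup snapshot, might be provisioned with the boto3 SDK; the bucket name, instance identifier, and region are placeholder assumptions.

# Rough sketch: provisioning two pieces of the suggested blog architecture
# with boto3 - an S3 bucket for media files and an on-demand RDS snapshot.
# Bucket name, DB instance identifier, and region are placeholders.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
rds = boto3.client("rds", region_name="eu-west-1")

# S3 bucket for the blog's images and videos
s3.create_bucket(
    Bucket="my-blog-media-bucket",  # bucket names must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Manual snapshot of the RDS instance holding posts, comments, and users
rds.create_db_snapshot(
    DBInstanceIdentifier="my-blog-db",
    DBSnapshotIdentifier="my-blog-db-backup-001",
)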
I continued to try it on various tasks, such as language translation, where the output was not impressive (as of today).
Proceeding with caution - AI/ML bias
Bias in AI/ML is a known challenge, and it will be an important consideration for humanity as the use of AI becomes widespread. Since the LLM has been trained on a subset of the overall knowledge that (possibly) exists on this planet, it is likely to show the typical bias that we often see in AI/ML models. When using it as a source of knowledge, one must remember that it is not a know-it-all that shows you "the truth". It responds based on the source data it was trained with, so its answers may not align with your beliefs and preferences if those beliefs do not align with that data.
For instance, this was its response when asked whether Britain's colonialism was good or bad.
Britain's colonialism had both positive and negative effects. On one hand, it brought economic development, Western education and Christianity to many parts of the world. On the other hand, it also caused displacement and loss of power for indigenous people, and the suppression of local cultures and traditions. The overall impact of colonialism is a complex and highly debated topic.
Now, I do not want to start a debate, but many post-independence countries might not agree with "the positive effects of colonialism". That, however, is the result of the source data used to train the model.
Going forward
I am very curious and will continue to explore LLMs, and possibly share some fun and "good-to-know" findings here.