CouldBe Email: How I used AI to turn meeting agendas into concise email summaries

30 June 2023, by Mmontsheng Maoto

Unnecessary meetings consume valuable time and hinder productivity. By making use of AI, we can turn many of them into streamlined emails. Here is how I built CouldBe Email, a GPT-based application that creates emails from meeting agendas and topic points.


Why I build with AI

Ever since OpenAI introduced ChatGPT last year, the buzz surrounding AI has reached new heights. It seems like everyone has been caught up in the excitement, and I, too, felt the urge to dive into AI and create something exciting.

I am a full stack developer who loves creating things in my spare time, and I saw the possibilities of AI as almost limitless. With this in mind, I decided to explore and experiment with AI by first developing a recipe generator app using GPT. This was my first GPT-based project, and the experience helped me build CouldBe Email.

There are several reasons why we might want to use AI to build apps:

  • GPT has revolutionised AI accessibility

With the introduction of user-friendly APIs, even non-technical individuals can effortlessly engage with AI systems. This change has opened doors to a wider audience, allowing us to harness the power of AI for content creation, language processing, and more.

  • Impressive responses

What strikes a lot of people when working with ChatGPT and other new AI tools is how well their responses are written. From simple prompts, you can get comprehensive and complex answers written in any style you like.

  • Saves time

Instead of building complex language models from scratch, we can make use of GPT's pre-trained models, which have been trained on vast amounts of text data.

  • The GPT API can be applied to a wide range of domains and use cases

Whether it's building chatbots, virtual assistants, content generators, recommendation systems, or language translation tools, we can leverage the power of the GPT API to enhance some of these applications. For example, a real estate chatbot can assist with property comparisons, offering valuable options based on user preferences.

Why I created CouldBe Email

I was scrolling through Twitter one evening, and I came across a tweet from one of the guys I follow. He was writing about things that could be improved through automation, such as replacing meetings with emails.

This made me think about the meetings I have been to that weren't necessary at all: meetings like status updates, which could easily be communicated in writing. I then thought to myself that perhaps GPT could help turn meeting agendas into concise emails that would achieve the same outcome as the intended meeting. This is where the idea for CouldBe Email was born.

CouldBe Email focuses on automating the transformation of meeting topics into succinct email summaries. The app uses the GPT API: a user inputs a meeting agenda or topic points, and the app produces a ready-to-go email, in a variety of tones, that can be sent to all meeting participants.

This offers a solution to the issue of ineffective meetings and provides a better approach to collaborating and sharing knowledge. By utilising a tool like CouldBe Email, we can improve our time management and productivity, enabling us to dedicate more attention to our essential responsibilities.

How I built CouldBe Email

Step 1: Using OpenAI's playground interface

OpenAI has a Playground interface, which makes it simple to test prompts and become acquainted with the API. It provides a text-based chat interface where users can input prompts or questions, and the model responds accordingly.

The Playground also allows users to tweak various settings, such as temperature and max tokens, to control the response generation. It's a great tool for experimenting with the model and exploring its capabilities.

The OpenAI Playground is a powerful tool both for technical and non-technical users who want to engage with GPT-3 and for developers building applications on top of the OpenAI API. Its accessible and interactive nature makes it an invaluable resource for exploring and harnessing the capabilities of GPT-3 without extensive coding knowledge. I was able to use GPT-3 without writing a single line of code, which made it easier and faster to build my CouldBe Email app.

Step 2: Choosing the right playground mode

The Playground has different modes to choose from, including:

  • Chat:

This mode lets you have a chat-like conversation with the model. You start with system input, where you set up the instructions on how the AI model is to respond to user messages. Then you can type in a message as user input, and the AI assistant responds. This mode makes it seem like you're having a friendly conversation, and it works great for creating chatbots, generating dialogue, or getting some helpful tips.

[Screenshot 1: the Playground in chat mode]

  • Completions:

This mode allows you to provide a prompt or statement as input, and the model generates a completion based on that prompt. The prompt can be a few words or a full sentence. The model then generates a continuation or completion of the provided text, such as generating summaries of longer texts, expanding on ideas, composing creative pieces, or even offering suggestions based on the given prompt.

[Screenshot 2: the Playground in completions mode]

CouldBe Email makes use of chat mode. Chat mode gives you finer control over how the model behaves through the system message. In essence, this mode gives you two inputs, system and user, as you will see in the next step.

The same output could also be achieved with completions mode, except you would have to provide the system instruction together with the user input in a single prompt, much like typing everything into one ChatGPT message. The text could look like: "You are an employee who hates meetings. Convert this sales pitch meeting agenda about a new skin product to an email".

Another difference between chat and completion is cost. Completions are a little bit more expensive depending on the model that you choose. You can read more about the models and find out which one might be the best for your use case.

The main reason I used chat mode was the ability to set context. Chat mode has been optimised for conversation-like interaction with the model, which opens up possibilities for CouldBe Email with minimal effort: I could easily ask the model in plain English to generate a Teams or Slack text version of the email without having to set the context again.

Step 3: System and user input

Once I'd decided to use chat mode, I needed to create the system prompt. For example, the system input could be something like: "You're role-playing a colleague who hates meetings and prefers to communicate via email. You have to convert meeting subjects, agendas, and titles to email content without ever suggesting a meeting yourself. Stay in character". This makes GPT think and behave like a colleague who prefers email.

Now, when a user inputs a meeting agenda, the AI will respond appropriately and provide the user with a drafted email that does not suggest a meeting.

[Screenshot 3: a drafted email generated from a meeting agenda]
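As a rough illustration, here is a minimal sketch (not the app's exact code) of what such a chat-mode request looks like from Node.js. The model name is just an example, and the agendaToEmail helper is a name I'm using for illustration; the request and response shapes follow OpenAI's chat completions REST API.

const SYSTEM_PROMPT =
  "You're role-playing a colleague who hates meetings and prefers to communicate " +
  "via email. You have to convert meeting subjects, agendas, and titles to email " +
  "content without ever suggesting a meeting yourself. Stay in character.";

// Sends the system prompt plus the user's meeting agenda to the chat completions endpoint.
async function agendaToEmail(agenda: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // key stays on the server
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // example model; pick whichever chat model suits you
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: agenda },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}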

Step 4: Tweaks to the system - learnings

The model tends to be repetitive and can also give short responses, but I tweaked this by using temperature, maximum length, and penalties to control the output generated by the language model:

  • The temperature parameter adjusts the randomness of the generated text. A higher temperature value, such as 1.0, leads to more diverse and creative responses, while a lower value, like 0.5, produces more focused and predictable outputs. CouldBe Email works best with a temperature of 0.7. This keeps the responses focused while allowing for some level of creativity.
  • The maximum length sets the maximum length of the response in tokens. By setting a specific token limit, I can ensure that the model doesn't produce excessively long emails or messages, helping to maintain a desired length for the output. This is particularly useful when integrating the language model into applications with character or word constraints, such as Slack. Tokens are the pieces that words are broken into before the model processes them.
  • The frequency penalty parameter, often referred to as the "repetition penalty", discourages the repetition of words or phrases in the generated text. By increasing the penalty value, such as 1.2 or 1.5, the model becomes more inclined to generate varied responses, reducing the chances of repetitive output. For CouldBe Email, repetitions don't matter, so I set the parameter to 0.

By adjusting these parameters, I could fine-tune GPT to suit my specific needs, balancing creativity and coherence, controlling the output length, and reducing repetitive patterns. These parameters provide a level of control over the language model, enabling customisation for various applications and contexts.
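Continuing the sketch from the previous step, these settings are just extra fields on the same request body. The max_tokens value below is an arbitrary example; the temperature and frequency penalty are the values mentioned above.

// The same request body as in the earlier sketch, with the tuning parameters added.
const requestBody = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: SYSTEM_PROMPT }, // system prompt from step 3
    { role: "user", content: agenda },          // the user's meeting agenda
  ],
  temperature: 0.7,     // focused output, with some room for creativity
  max_tokens: 500,      // illustrative cap on the length of the generated email
  frequency_penalty: 0, // repetition isn't a concern for short emails
};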

Step 5: Making it user-friendly

After experimenting in the Playground, I was ready to turn the idea into a production-ready application.

I created a UI using Vue.js and an API using Node.js with Express. The UI contains two text inputs: one for the user to enter their meeting agenda or title, and the other to set the tone of the email. The tone is basically the attitude you want to present to the recipient.

At the click of a button, the textbox inputs are sent to my API for processing before being passed on to the OpenAI API. Once the result is returned, a perfectly laid out email version of the meeting agenda is displayed on screen for the user to see.

[Screenshot 4: the CouldBe Email interface]
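On the frontend, that round trip might look roughly like the sketch below. The /api/generate-email endpoint name is made up for illustration; the point is that the browser only ever talks to my own backend, never to OpenAI directly.

// Hypothetical frontend helper: posts the agenda and tone to the app's own backend.
async function generateEmail(agenda: string, tone: string): Promise<string> {
  const response = await fetch("/api/generate-email", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ agenda, tone }),
  });
  const { email } = await response.json();
  return email;
}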

You can give the application a try here.

Note: To make use of OpenAI's API, you need an API key. This key is secret, and as such, it should not be stored on the frontend. It can be tempting to call the API directly from the frontend with your secret key because it's convenient and simple, and it feels like you're saving time by bypassing a backend server. The problem is that anyone who can inspect the network traffic can then grab your API key. I cannot emphasise this enough: do not hardcode keys on the frontend.
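On the backend, a minimal Express route along these lines keeps the key server-side. Again, this is a sketch rather than the production code: the route name is hypothetical, and agendaToEmail is the helper sketched earlier, which reads the key from an environment variable.

import express from "express";

const app = express();
app.use(express.json());

// The OpenAI key never leaves the server; the browser only sees this route.
app.post("/api/generate-email", async (req, res) => {
  const { agenda, tone } = req.body;
  try {
    // agendaToEmail is the helper from the earlier sketch; the tone is folded into the prompt.
    const email = await agendaToEmail(`Write the email in a ${tone} tone.\n\n${agenda}`);
    res.json({ email });
  } catch (err) {
    res.status(500).json({ error: "Failed to generate email" });
  }
});

app.listen(3000);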

The results of the project and how successful it has been

The CouldBe Email project has been a success. The application has demonstrated its ability to save time and increase productivity.

Thanks to Splitbee, a web analytics tool, I've been able to track and analyse user interactions on the website with precision, and the data shows that the app has been well-received.

Learnings and advice from using AI

As AI continues to evolve and AI-powered solutions become more prevalent, CouldBe Email serves as a promising example of how AI can revolutionise traditional communication methods, ultimately leading to more efficient and productive work environments.

Through this project, I've picked up several useful learnings. Below are my observations and advice for others looking to leverage AI in their applications.

The power of AI to understand and generate human-like text

One of the key learnings from using AI in CouldBe Email is the remarkable ability of AI models, such as GPT-3, to understand and generate human-like text. The advancements in natural language processing have opened up new possibilities for automating complex tasks and improving communication efficiency.

AI has the potential to revolutionise various industries by augmenting human capabilities and streamlining processes.

The cost of AI can be expensive

It is crucial to consider the cost implications associated with AI usage. AI models like GPT-3 can be resource-intensive and may come with a significant price tag. The cost of utilising AI systems typically varies based on several factors, including the complexity of the AI model, the scale of usage, and the specific AI platform or service provider.

As such, careful cost management should be an essential consideration for any AI-based project. Monitoring and optimising API usage helps to avoid unexpected expenses.

Understanding the pricing structure and implementing cost-control measures can help manage the financial aspect of AI integration effectively; read more about the pricing structure here. I have also found this pricing calculator to be useful.

Implement safeguards on AI usage to keep costs down

It is important to be aware of the potential for user abuse leading to exponential cost increases. AI models are designed to generate responses based on user input, and without proper safeguards, users can inadvertently or intentionally abuse the system, leading to excessive API usage and skyrocketing costs.

Implementing rate-limiting methods, setting usage quotas, or introducing subscription plans can help mitigate these risks and maintain cost stability while ensuring a fair and sustainable user experience.
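In an Express app, one way to do this is with a rate-limiting middleware such as express-rate-limit. A rough sketch, with arbitrary limits and assuming the app instance from the earlier backend sketch:

import rateLimit from "express-rate-limit";

// Hypothetical safeguard: cap each IP at 20 generations per hour so one user
// can't quietly run up the OpenAI bill.
const emailLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // one hour
  max: 20,                  // requests per IP per window
  message: { error: "Too many requests, please try again later." },
});

app.use("/api/generate-email", emailLimiter);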

Conclusion

The use of AI in content creation and language processing has revolutionised the way we interact with technology. GPT and OpenAI's API have made AI capabilities easily accessible to individuals without technical expertise. CouldBe Email shows how AI can optimise communication processes and make them more effective and efficient.


Mmontsheng Maoto is an Intermediate Java Developer at AfriGIS, a renowned provider of geospatial services based in Pretoria. He actively contributes to the development and improvement of innovative geospatial applications to help clients use location-based data to make informed decisions.
