Generative AI - learnings from Microsoft Courses

My learnings from the Imagine Cup Cloud Skills Challenge

Generative AI is one of the most significant recent advances in technology. It enables developers to build applications that use machine learning models, trained on large volumes of data from across the Internet, to generate new content that can be indistinguishable from content created by a human.

A good example of generative AI is ChatGPT, which has taken the world by storm. This article, and the series of articles to follow, covers my learnings from the Microsoft Learn Imagine Cup Cloud Skills Challenge modules.

Let's start with how we can ensure that AI doesn't harm us.

To do that, we should follow these four steps before developing any kind of AI solution:

  1. Identify potential harms that are relevant to your planned solution.

  2. Measure the presence of these harms in the outputs generated by your solution.

  3. Mitigate the harms at multiple layers in your solution to minimize their presence and impact, and ensure transparent communication about potential risks to users.

  4. Operate the solution responsibly by defining and following a deployment and operational readiness plan.
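As a rough illustration of the mitigation step, one safety layer might be a filter between the model and the user. The sketch below uses a placeholder blocklist; real systems combine several layers (input filtering, model-level safety, output classifiers, human review):

```python
# Minimal sketch of one mitigation layer: a blocklist filter applied to
# model output before it reaches the user. The terms and fallback message
# are placeholders, not a real safety system.

BLOCKED_TERMS = {"credit card number", "social security number"}  # invented examples

def filter_output(model_output: str) -> str:
    """Return the model output, or a safe fallback if a blocked term appears."""
    lowered = model_output.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return "[Response withheld: potentially sensitive content detected]"
    return model_output

print(filter_output("The weather today is sunny."))
print(filter_output("Sure, here is a credit card number..."))
```

A keyword check like this is deliberately simplistic; it misses paraphrases and produces false positives, which is exactly why the steps above call for mitigation at multiple layers rather than a single filter.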

What is AI?

Simply put, AI is the creation of software that imitates human behaviors and capabilities. Here are some of the use cases of AI:

  • Machine learning - This is often the foundation for an AI system, and is the way we "teach" a computer model to make predictions and draw conclusions from data.

  • Anomaly detection - The capability to automatically detect errors or unusual activity in a system.

  • Computer vision - The capability of software to interpret the world visually through cameras, video, and images.

  • Natural language processing - The capability for a computer to interpret written or spoken language, and respond in kind.

  • Knowledge mining - The capability to extract information from large volumes of often unstructured data to create a searchable knowledge store.

Understanding natural language processing

Natural language processing (NLP) is the area of AI that deals with creating software that understands written and spoken language.

NLP enables you to create software that can:

  • Analyze and interpret text in documents, email messages, and other sources.

  • Interpret spoken language, and synthesize speech responses.

  • Automatically translate spoken or written phrases between languages.

  • Interpret commands and determine appropriate actions.
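To make the last capability concrete, here is a toy, rule-based command interpreter. The commands and action names are invented for illustration; production NLP uses trained models, not keyword lookups:

```python
# Toy rule-based command interpreter: maps a keyword found in a
# natural-language command to an action name. Real NLP systems use
# trained models; this only illustrates "interpret commands and
# determine appropriate actions".

ACTIONS = {
    "lights": "toggle_lights",
    "music": "play_music",
    "weather": "report_weather",
}

def interpret(command: str) -> str:
    """Return the action name for the first recognized keyword, else 'unknown'."""
    for keyword, action in ACTIONS.items():
        if keyword in command.lower():
            return action
    return "unknown"

print(interpret("Turn on the lights, please"))  # toggle_lights
```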

Understanding knowledge mining

Knowledge mining is the term used to describe solutions that involve extracting information from large volumes of often unstructured data to create a searchable knowledge store.
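As a toy sketch of that idea, the snippet below builds a minimal inverted index over a few made-up text snippets so they become searchable by keyword. Real knowledge-mining services (such as Azure AI Search) add OCR, entity extraction, ranking, and much more:

```python
from collections import defaultdict

# Toy knowledge-mining sketch: build an inverted index (word -> document ids)
# over unstructured text so it becomes keyword-searchable. The documents
# are invented examples.

documents = {
    1: "Invoice for cloud services, total due in 30 days",
    2: "Meeting notes: migrate the invoice system to the cloud",
    3: "Employee handbook, updated travel policy",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().replace(",", " ").replace(":", " ").split():
        index[word].add(doc_id)

def search(term: str) -> set:
    """Return the ids of documents containing the term."""
    return index.get(term.lower(), set())

print(search("invoice"))  # documents 1 and 2
```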

What is generative AI?

Artificial Intelligence (AI) imitates human behavior by using machine learning to interact with the environment and execute tasks without explicit directions on what to output.

Generative AI describes a category of capabilities within AI that create original content. People typically interact with generative AI that has been built into chat applications. One popular example of such an application is ChatGPT, a chatbot created by OpenAI, an AI research company that partners closely with Microsoft.

Generative AI applications take in natural language input, and return appropriate responses in a variety of formats such as natural language, images, or code.

Large language models

Generative AI applications are powered by large language models (LLMs), which are a specialized type of machine learning model that you can use to perform natural language processing (NLP) tasks, including:

  • Determining sentiment or otherwise classifying natural language text.

  • Summarizing text.

  • Comparing multiple text sources for semantic similarity.

  • Generating new natural language.
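A crude way to see the semantic-similarity task from the list above: represent each text as a word-count vector and compare the vectors with cosine similarity. LLMs compare learned embedding vectors instead; this bag-of-words version is only a stand-in to show the comparison mechanics:

```python
import math
from collections import Counter

# Crude similarity sketch: cosine similarity over bag-of-words vectors.
# LLM-based systems compare learned embeddings; the vector-comparison
# mechanics shown here are the same.

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("the cat sat on the mat", "a cat sat on a mat"))
print(cosine_similarity("the cat sat on the mat", "stock prices rose today"))
```

The first pair scores much higher than the second, mirroring how embedding-based similarity ranks related texts above unrelated ones.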

What is Azure OpenAI?

Azure OpenAI Service is Microsoft's cloud solution for deploying, customizing, and hosting large language models. It brings together OpenAI's cutting-edge models and APIs with the security and scalability of the Azure cloud platform. Microsoft's partnership with OpenAI gives Azure OpenAI users access to the latest language model innovations.

What are copilots?

The availability of LLMs has led to the emergence of a new category of software application, often referred to as a copilot. Copilots are often integrated into other applications and provide a way for users to get help with common tasks from a generative AI model. Copilots are based on a common architecture, so developers can build custom copilots for various business-specific applications and services.

For example, Microsoft Edge includes a built-in copilot that assists with browsing tasks, and GitHub Copilot helps developers write code.

Conclusion

Generative AI has the potential to revolutionize the way we work and live, but it is crucial that these systems are developed and used in a responsible manner. Here are the key steps to ensure generative AI is safe and ethical:

  1. Identify potential harms - The first step is to identify potential harms that are relevant to your use case of generative AI. This could include privacy risks, bias, misinformation, plagiarism, etc.

  2. Measure model outputs for harms - You need to measure the presence of potential harms in the outputs generated by your generative AI model. This can be done through human evaluation, automated detection tools, or a combination of both.

  3. Mitigate harms at multiple layers - Harms should be mitigated at multiple layers of the system - from the data used to train the model to the final outputs. This includes techniques like data sanitization, bias mitigation, human oversight, and model constraints.

  4. Create a trusted environment - It is important to provide access to generative AI applications in a controlled manner that minimizes the risk of data leakage and intellectual property theft. This could involve creating custom interfaces, data sandboxes, and employee training.

  5. Be transparent about data sources - Companies need to be transparent about the data sources and potential flaws or bias in the data used to train generative AI models. This transparency helps build trust in the outputs.

  6. Use human-in-the-loop review - Combining human judgment with generative AI outputs can help catch errors, detect bias, and ensure responses are safe and ethical before being deployed.
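The "measure" step above can be sketched as computing a harm rate over a batch of model outputs with some flagging function. The detector here is a placeholder keyword check; real pipelines use trained classifiers and/or human evaluation:

```python
# Sketch of the "measure" step: run a batch of model outputs through a
# harm detector and report the flagged rate. The detector is a placeholder
# keyword check with invented terms, not a real classifier.

def is_flagged(output: str) -> bool:
    placeholder_terms = ("password", "hate")  # invented examples
    return any(t in output.lower() for t in placeholder_terms)

def harm_rate(outputs: list[str]) -> float:
    """Fraction of outputs flagged by the detector."""
    if not outputs:
        return 0.0
    return sum(is_flagged(o) for o in outputs) / len(outputs)

sample = [
    "Here is a summary of your document.",
    "Your password is probably 1234.",   # flagged
    "The meeting is at 3 pm.",
    "I hate to say it, but...",          # flagged: a false positive
]
print(harm_rate(sample))  # 0.5
```

Note the false positive in the sample: crude automated checks over-flag, which is one reason the human-in-the-loop review in step 6 matters.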

In summary, a responsible approach to generative AI starts with identifying potential harms, measuring and mitigating them at multiple levels, providing transparency around model inputs and outputs, and using human oversight where needed. With the right governance strategies in place, generative AI can drive innovation while minimizing risks to users.

#wemakedevs