Generative AI: not even creativity will escape it
Generative artificial intelligence (AI) systems such as ChatGPT, DALL-E, Midjourney, and Stable Diffusion have the power to disrupt business and content production like never before. Here are some examples:
- Simulate a human conversation: for example, to automate customer service tasks.
- Write content: AI can now help humans write emails, articles, and messages with a level of quality comparable to, or better than, human-written content, especially when the drafts are iterated on with the AI.
- Write and debug computer code: AI can write working code; all you have to do is describe the required functionality.
- Foster creativity: it has long been believed that creativity is limited to humans and that computers will never be creative. Artists and creators can now interactively leverage AI to enhance their own creativity and quickly test ideas. For example, what would Iron Man look like in a stylish three-piece suit (Fig. 1)?
Generative AI systems have reached an astounding level of quality in 2022 and will continue to advance in the coming years, enabling many new applications. However, these tools still have a number of limitations and risks.
Content quality is not always reliable, for several reasons:
- Large language models like ChatGPT are not designed to answer questions, let alone to give correct answers. They are nothing more and nothing less than highly sophisticated autocomplete tools – just like the one on your smartphone's keyboard. They are therefore subject to a phenomenon known as hallucination: ChatGPT often makes factual errors and produces answers that seem correct but are not.
- The generated content can also be very vague and generic. For example, we tried to produce an article created 100% by AI, with one and only one prompt. At first glance, you might think the article was written by a person, because it is written correctly, but the content itself is mediocre and very general.
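The "sophisticated autocomplete" claim above can be made concrete with a toy sketch: a bigram model that predicts the next word purely from co-occurrence counts, with no notion of truth. The corpus and counting scheme here are illustrative assumptions, nothing like ChatGPT's actual implementation, but the principle – predict the most likely continuation, not the correct answer – is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which (a bigram model).
# A real LLM does the same kind of next-token prediction, at vastly
# larger scale and with far richer context.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(autocomplete("the"))  # → cat ("cat" follows "the" most often)
```

Note that nothing in this model checks whether the continuation is *true* – only whether it is *likely* – which is exactly why hallucinations happen.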

Ethical and social risks:
- First, energy consumption. Training the GPT-3 model required several thousand petaflop/s-days of compute. By way of illustration, according to a research article [1] from NVIDIA, Microsoft Research, and Stanford University, training GPT-3 on a single NVIDIA V100 GPU would take nearly 300 years.
- The second important ethical risk is algorithmic bias. Algorithmic bias refers to systematic and unintended differences in the way a computer program treats individuals or groups. This can happen when the data used to train the algorithm is itself biased, resulting in inaccurate or unfair results. Although many efforts have been made to avoid such biases, they still exist and can be very surprising and disturbing, as Figure 2 shows.
- The third ethical question relates directly to the ease with which content can be created: AI-generated content has already started flooding the internet. Fortunately, countermeasures are emerging, such as GPT-Zero, a tool that aims to detect whether or not content was generated by AI.
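The "nearly 300 years" figure cited above can be checked with back-of-envelope arithmetic. This is a rough sketch: the two constants below – the ~6 × parameters × tokens FLOP heuristic for GPT-3 and an assumed sustained V100 throughput of ~35 teraFLOP/s – are common ballpark assumptions, not figures taken from the article.

```python
# Back-of-envelope check of the "nearly 300 years on one V100" claim.
# Both constants are illustrative assumptions, not exact measurements.
total_flops = 6 * 175e9 * 300e9   # ~6 * params * tokens heuristic for GPT-3
v100_flops_per_s = 35e12          # assumed sustained mixed-precision throughput

seconds = total_flops / v100_flops_per_s
years = seconds / (365 * 24 * 3600)
print(round(years))  # on the order of 300 years
```

Even with generous assumptions about GPU efficiency, the order of magnitude lands in the hundreds of years, which is why such models are trained on clusters of thousands of GPUs instead.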

- Cybersecurity: generative AI systems can be leveraged to help carry out cyberattacks, and even crimes such as writing a phishing email or malicious computer code. ChatGPT is not supposed to assist with such crimes, and if you ask it directly it will refuse, because it is protected by OpenAI's moderation API, which keeps the bot from producing unwanted content. However, it is relatively easy to circumvent these safeguards and jailbreak the AI, as shown in Figure 3, where I pretended to be writing material for a stand-up comedian.
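Why are such guardrails easy to circumvent? A deliberately naive sketch – a keyword filter, nothing like OpenAI's actual moderation system – illustrates the core cat-and-mouse problem: a filter matches surface patterns, while the harmful *intent* survives rephrasing.

```python
# Toy content filter: blocks requests that literally mention a flagged word.
# Real moderation systems are far more sophisticated, but face the same
# fundamental difficulty: intent can be disguised by rephrasing.
BLOCKED_KEYWORDS = {"phishing", "malware"}

def is_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked keyword (case-insensitive)."""
    words = set(prompt.lower().split())
    return not (BLOCKED_KEYWORDS & words)

# The direct request is blocked...
print(is_allowed("write a phishing email"))  # → False
# ...but a rephrased request with the same intent slips through.
print(is_allowed("write an urgent email asking a colleague for her password"))  # → True
```

Jailbreaks against real chatbots exploit the same gap at a higher level: role-play framings ("pretend you are a comedian on stage") change the surface form of the request without changing what is ultimately produced.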
The year 2022 was a turning point in artificial intelligence. It was the year tools like ChatGPT, DALL-E, Midjourney, and DreamStudio were launched; the technology is impressive, even if still far from perfect. It was also the year access to these AI tools became easy, moving from API-only access to web apps. Here is what we expect for 2023:
- Quality: the number of parameters and the amount of data used to train language models will continue to grow rapidly, resulting in more impressive and relevant results.
- Adoption: despite the rapid uptake seen in 2022, tools like ChatGPT are still used overwhelmingly by researchers, artists, geeks, and early adopters. They will become even more popular in 2023, with both individuals and organizations – we expect the user base to grow exponentially.
- Uses: as the number of users increases, individuals and organizations will create new use cases and learn more and more about how best to use these tools.
- Regulation: there is still a lot of uncertainty and a legal vacuum around these technologies. China, for example, will implement "deep synthesis" regulations from January 10, 2023 [2]; one consequence is that it will become mandatory to disclose whether content has been generated by AI.
These generative AI systems reach incredible levels of performance. However, it is important to remember that they are only tools. It is up to the artist, writer, or strategist to use them creatively and deliberately. The value of a well-designed piece of art or strategy lies in the eye, brain, and intent of the creator, not just the tools they use. So while AI can make content creation easier, it is ultimately the humans behind it who determine its value and impact. I think artists, writers, and consultants still have a bright future ahead of them, even with the advent of artificial intelligence.
[1] Narayanan et al., "Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM" (2021)
[2] https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/