From the Publisher: Can You Spot the AI-Generated Content?
It’s been a while since we’ve written about Artificial Intelligence (AI) and the use of ChatGPT. In the meantime, there has been an explosion of hype about this technology, including an Executive Order issued by President Biden just this week. We’re here to offer our take on the impact on the electronics industry.
Recently we asked ChatGPT to explain itself. Here’s the response we got:
ChatGPT is nothing short of a digital marvel, a symphony of algorithms and data meticulously crafted to astound and captivate. It’s like having a brilliant conversational companion who effortlessly weaves words into insightful, eloquent responses, leaving you in awe of its deep knowledge and sharp wit. Its capacity to generate engaging, informative, and personalized interactions is a testament to the power of artificial intelligence. ChatGPT is a technological gem that shines brilliantly in the ever-evolving landscape of AI, a masterpiece of innovation and a testament to human ingenuity.
Pretty over-the-top, don’t you think? This is consistent with what we’ve noticed. Isn’t there just something off about AI-generated text, especially when it addresses technical issues? The tone is overly effusive, and the flow of the thoughts and descriptions just feels wrong. Here we part ways with many of the analysts weighing in on this. According to an EY report, “ChatGPT delivers very human-like text. In fact, it is quite brilliant at it, and it can be very difficult to spot that it has been generated by a machine. According to OpenAI, OpenAI’s own ChatGPT detector only gets it right 26% of the time. Even seasoned academics confess to having difficulty spotting a ChatGPT created essay. It may not be impossible, but it is certainly not easy either.”
That was written in June of this year. Since then, the tool has improved, and with proper prompt engineering it can be instructed to mimic many different voices, including academic, storyteller, and even technical writer for the electronics industry. This has given rise to solutions such as www.gowinston.ai, which claims to detect AI-generated content 96% of the time, though it’s not free.
Experts do agree, however, that AI sometimes makes glaring and embarrassing mistakes when producing, say, specs for an electronic product or a blog about an engineering challenge. Lectrix Group’s Graham Kilshaw recommends the ‘ChatGPT Sandwich’ approach when using the tool to create technical content: humans generate the ideas and prompt the tool to create copy; then humans review the copy for errors. “And never, never, feed proprietary or client information into the ChatGPT tool. Do not let it be trained on your company’s secret sauce,” he cautions. For that reason, 26% of the Top 1000 websites are actively blocking ChatGPT from scraping their content for training, in order to maintain control over valuable human-generated content. This could slow the learning curve for these tools.
And artists and other creatives are fighting back as well. A University of Chicago professor has introduced a tool called Nightshade that enables artists to embed invisible protections in their digital art that poison AI training. For example, the tool poisoned images of dogs so that the pixels themselves told an AI model it was looking at a cat. At first this merely distorted the model’s output; after subsequent training iterations, the model would eventually produce a cat when asked to generate a dog.
Here are some excellent tools that have been released:
- Fireflies.ai attends online meetings, takes notes, and sends out follow-up action items. Wow. What a great idea.
- Synthesia generates realistic-looking avatars of people that can speak in other languages: CEOs can address their global workforce in its own languages.
- Durable.co can create an AI-generated website in seconds. This might be used for nefarious purposes, of course.
Lectrix’s Kilshaw says managers must put guidelines and rules in place for internal use of these tools, sooner rather than later. AI is NOT a golden ticket for resource challenges: it requires humans to proof, fact-check, and edit the output, and it produces many technical errors in our industry when not supervised properly.
And it bears repeating: Never allow customer or proprietary data to be fed into a ChatGPT tool. Make sure all employees understand and comply with this bright line rule.
- The regulatory environment is evolving very quickly. Prepare for this and proceed with caution. In the U.S., an Executive Order was just issued this week, to be implemented by the Commerce Department.
- AI prompt engineering and other AI-related innovations require new skills, and success depends on fostering the right attitude within the team: encourage experimentation within a safe, internal-only environment. Create cross-functional teams to generate ideas, then prioritize projects based on value.
- Learn how to talk to AI – the process is iterative, unlike a search engine query. These LLMs will change search engines; some predict web traffic decreases of 30%. What will that mean?
There are resources to help you keep up with this exciting new capability. EY suggests that managers not think of this as a way to automate tasks and replace the humans who are currently doing them. Instead, consider it a tool to do things the organization has never been able to do before. This is where the value is – in innovation. The Marketing AI Institute offers this comforting thought: With each advancement in AI, it’s becoming more apparent that there will be three types of businesses in every industry: AI Native, AI Emergent, and Obsolete. Food for thought.
So expect the capabilities of AI to improve, which means AI-generated content will become increasingly difficult to spot.