
ChatGPT in the industrial sector: game-changing or overhyped?

Given the meteoric success of ChatGPT, many within the industrial sector are beginning to look at what generative AI can offer their business. At the same time, there are concerns about high costs, inadequate data protection and a lack of tangible benefits. We outline some exciting and rewarding use cases while answering some of those key concerns. 


Artificial intelligence (AI) is now commonplace in every sector, and the industrial sector is no exception. But in recent months there has been unprecedented hype about generative AI applications, particularly since the launch of the language model ChatGPT. These technologies have an extensive range of uses and, as such, could potentially revolutionise the industrial sector.

What could these technologies mean for your business? We outline promising use cases here and highlight both the operational benefits, such as improved efficiency and cost optimisation, and the challenges, such as data protection. 

In the Zühlke blog post ‘ChatGPT: generate revenue, not just text!’, we explained what large language models such as ChatGPT are and how the technology works. We also discussed its potential initial uses and showed that large language models (LLMs) of this kind could offer businesses a broad range of benefits, from improving customer service and automating routine tasks to generating content.  

Generative AI can produce significant cost savings and boost your business’ efficiency, but it is not without its challenges. You need to ensure, for example, that the technology is used ethically and that the generated text is checked to ensure it is both accurate and appropriate.  

With this as our basis, we will now take a deeper dive and outline various concrete use cases, examining the advantages, challenges and technological issues involved in the use of generative AI models and large language models in the industrial sector.  

In broad terms, there are three main areas where large language models (LLMs) such as ChatGPT can be used:  

  • Ask your document: Search large quantities of data in the form of documents and content, and receive a natural language response to even the most complex of questions. 

  • Generate text: Generate content such as text or even programming code. 

  • Dialogue: Deploy as a user interface in internal and external communication. 

Knowledge database for more efficient processes and better customer retention

‘Ask your document’, in particular, opens up a broad range of possibilities in the industrial sector, speeding up and simplifying both internal and external processes. For example, a large language model could respond to an error message from a machine or to a freely worded question from an employee by retrieving the correct troubleshooting guidance from a database, including the right replacement parts and step-by-step instructions.  
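
To make this more concrete, the sketch below shows the ‘Ask your document’ pattern in Python using the OpenAI chat API. The document names, contents and model choice are illustrative assumptions, not real data; a production system would typically use embedding-based vector search over the full knowledge database and the Azure-hosted models discussed later in this article.

```python
# Minimal 'Ask your document' sketch: retrieve the most relevant snippet
# from an internal knowledge base and let the model answer in natural language.
# Assumes the OpenAI Python SDK; documents and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical internal documents (in practice: service manuals, part lists, ...)
documents = {
    "press_error_E42.txt": "Error E42 on the hydraulic press: replace seal kit "
                           "HP-220, then bleed the hydraulic circuit (steps 1-5).",
    "conveyor_setup.txt": "Initial setup of conveyor line C3: align the sensors, "
                          "set belt tension to 45 N, run the calibration program.",
}

def retrieve(question: str) -> str:
    """Very naive keyword retrieval; real systems would use embeddings and vector search."""
    scores = {
        name: sum(word in text.lower() for word in question.lower().split())
        for name, text in documents.items()
    }
    return documents[max(scores, key=scores.get)]

def ask_your_document(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": "Answer only from the provided context "
                                          "and name the source document."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_your_document("The press shows error E42 - which replacement part do I need?"))
```

The key design point is that the model only sees the retrieved context and is instructed to answer from it, which keeps responses grounded in the business’ own documentation.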

Beyond error messages, the same approach can be applied to the composition of materials for process manufacturing, hazard warnings for transport, instructions for assembly processes, technical drawings, replacement part identification, certificates for materials or products, certificates of origin, sustainability information and much more.  

The ‘Ask your document’ approach can also support external business processes, particularly in the area of self-service. Given the increasing shortage of service engineers, a large language model based on an intelligent knowledge database could respond to customer enquiries and problems, and help to remedy issues by providing information on usage, error messages and troubleshooting, initial setup, and so on. 

Automated programming with AI-generated code

In the blog post ‘ChatGPT: generate revenue, not just text!’, we demonstrated that ChatGPT could be used for tasks such as generating service or marketing text. Using ChatGPT to generate code is likely to have a far bigger impact on the industrial sector, for example by enabling machine controls to be programmed automatically. We have already seen initial ideas and approaches that use large language models to automate the programming of control systems for plants and machinery. Here, the employee simply outlines what they want the machine to do, and the AI independently generates the relevant code.  

But the possibilities do not stop there: large language models could also be used to generate code for automated testing. A description of the desired tests would be enough to produce executable code here too. While these approaches are very exciting and promising, they are still in the exploratory phase and raise major ethical questions regarding the use of AI-generated code. At Zühlke, we are working with customers to draft guidelines for the use of this type of code.  
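
As a concrete illustration of the test-generation idea, the sketch below sends a plain-language test description to the model and asks for pytest code. The prompt wording, model name and example description are assumptions made for illustration, and, in keeping with the guidelines mentioned above, the generated code is treated as a draft that a person must review before it is ever executed.

```python
# Sketch: generate automated test code from a plain-language description.
# Prompt, model name and the described function are illustrative assumptions;
# generated code should always be reviewed by a person before it is run.
from openai import OpenAI

client = OpenAI()

def generate_test_code(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write pytest test functions. "
                                          "Return only Python code, no explanations."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content

description = (
    "Test that pack_crate(items) raises ValueError when the total weight of the "
    "items exceeds 25 kg, and that it returns a packing list otherwise."
)

draft = generate_test_code(description)
print(draft)  # stored as a draft for human review, e.g. in a pull request
```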

Large language models as a single point of contact for employees

System dialogue is another compelling use case that could significantly boost efficiency and effectiveness in the industrial sector. It is conceivable that large language models such as ChatGPT could be used as a single point of contact within a business. Employees would no longer have to toggle between the different systems used for ERP, WMS or MES. From the users’ perspective, it would not matter which system they were currently working with, and they could simply interrogate the system and enter data by inputting questions or commands. Defined, login-based access rights would ensure that data input and the system’s responses can be restricted to appropriately authorised users. 

Such interactions can help employees with workflow management and allow personnel to be deployed more flexibly. If a staff member is absent at short notice, for example on health grounds, employees from other areas could step in and take over certain activities, such as in intralogistics or in service, with iterative support from the system. The same would apply to new employees or temporary staff, who could be trained up and productively deployed in far less time than is currently possible. 
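
The sketch below shows one way the access-rights aspect of such a dialogue interface could work: requests are mapped to backend actions, and each action is only executed if the logged-in user’s role permits it. All roles, actions and system calls here are hypothetical placeholders rather than real ERP/WMS/MES interfaces.

```python
# Sketch: route commands to backend systems (ERP/WMS/MES) only if the
# logged-in user's role permits it. Roles, actions and parameters are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    roles: set

# Which roles may trigger which backend action
PERMISSIONS = {
    "book_goods_receipt": {"warehouse", "admin"},               # WMS
    "create_service_order": {"service", "admin"},               # ERP
    "read_machine_status": {"production", "service", "admin"},  # MES
}

def execute(user: User, action: str, **params) -> str:
    allowed = PERMISSIONS.get(action, set())
    if not user.roles & allowed:
        return f"Sorry, you are not authorised to perform '{action}'."
    # In a real system this call would hit the actual ERP/WMS/MES API.
    return f"{action} executed with {params}"

temp_worker = User("A. Meier", roles={"warehouse"})
print(execute(temp_worker, "book_goods_receipt", order="PO-4711", quantity=12))
print(execute(temp_worker, "create_service_order", machine="Press 3"))
```

In a real deployment, the mapping from the employee’s free-text request to an action and its parameters would itself be done by the language model, while the permission check stays in conventional, auditable code.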

Efficient knowledge management, cost-optimised processes and better customer retention

The use cases described above clearly show that the industrial sector can benefit from generative AI. Use cases in the ‘Ask your document’ and ‘Dialogue’ categories not only enable more efficient searches of the business’ own data: in the process, they create a new dimension in knowledge management, too. Queries can be made in natural language with less dependency on keywords, and the models provide context-dependent results and allow iterative queries of the data via the chat feature.  

Large language models do not just boost efficiency and lower costs for internal and external processes; they also help businesses respond to demographic change and the skills shortage, for example by making work easier for service staff. Ultimately, when information in (self-)service is provided faster and with greater precision, this has the knock-on effect of improving the business’ customer retention and customer satisfaction. Code generation concepts already in place or in development enable businesses to accelerate, simplify and automate their processes, leading to increased efficiency and cost savings. 

Data protection, costs and quality – the challenges of using generative AI

It should be noted that using large language models such as ChatGPT is not without its challenges. Below we examine and assess the most important issues. 

One key aspect here, without a doubt, is data protection. As a rule, businesses’ sources and documents tend to be highly specific, like service instructions for products, or even confidential, like process or function descriptions. It is crucial that these data and texts are not openly accessible via ChatGPT. So, are businesses required to build their own AI model? 

In principle, it is possible for a company to develop its own AI model. At present, however, this is not a cost-efficient course of action for the vast majority of businesses, and is therefore neither an expedient nor a necessary move. The use cases mentioned can instead be covered effectively by an in-house system that augments the OpenAI and Microsoft AI models. In this scenario, a business can benefit from the impressive abilities of these models without making sensitive information publicly available. 

Internal company data can be indexed and passed to the AI model together with the question, effectively extending the existing large language model with the business’ own knowledge. In addition, traditional access control can ensure that sensitive data and documents are only searchable by authorised companies or employees, so that technical and process-related data remain protected.  
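
A small sketch of that access-control idea: each indexed document carries an authorisation tag, and only documents the requesting user is cleared for are considered before anything is passed to the model. The document names, groups and contents are invented for illustration.

```python
# Sketch: restrict which indexed documents a user's question may be answered from.
# Document names, groups and contents are hypothetical.
documents = [
    {"name": "assembly_line_manual.pdf", "groups": {"production", "service"},
     "text": "Torque for housing screws: 8 Nm."},
    {"name": "process_recipe_X17.pdf", "groups": {"process_engineering"},
     "text": "Confidential mixing ratios for product X17."},
]

def searchable_for(user_groups: set) -> list:
    """Only documents the user is authorised for are passed on to retrieval."""
    return [d for d in documents if d["groups"] & user_groups]

service_docs = searchable_for({"service"})
print([d["name"] for d in service_docs])  # the confidential recipe is filtered out
```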

Our experience with personal data has also been positive, as the same standards apply here as to cloud storage. For Microsoft’s version of ChatGPT, processing takes place on Azure Cognitive Services in the Netherlands, within the EU. All models have been cloud-based so far, but rapid progress on open-source models makes on-premises solutions a realistic prospect for the near future. 

Fees are charged for inputting your own documents and building a proprietary knowledge database, but such costs are much lower than the cost of building your own model. Large language model providers charge according to usage. In the case of ChatGPT, establishing the knowledge database is a one-off expense charged at €0.0018 per 1,000 words. On top of this, there are the costs of ongoing updates to the database and of subscriptions, but these are not the main cost drivers. 
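
As a rough, illustrative calculation using the per-1,000-word rate quoted above and an assumed corpus of five million words of manuals and documentation (the corpus size is an assumption, not a benchmark):

```python
# Rough, illustrative estimate of the one-off indexing cost.
# The rate is the figure quoted above; the corpus size is an assumption.
rate_per_1000_words = 0.0018   # EUR per 1,000 words, as quoted for ChatGPT
corpus_words = 5_000_000       # assumed: roughly five million words of documents

one_off_cost = corpus_words / 1000 * rate_per_1000_words
print(f"One-off indexing cost: EUR {one_off_cost:.2f}")  # EUR 9.00
```

Even with periodic re-indexing, this remains far below the cost of developing a proprietary model, as noted above.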

One final question that businesses often ask is about the accuracy and quality of the responses. As with human employees, there is no 100% guarantee that the responses provided are correct. However, it is easy to check the accuracy of the information found because the sources are provided. As with the introduction of any new technology, it is absolutely vital to adapt the underlying processes. 

Generative AI: ignore the hype – generate added value

There’s no denying that ChatGPT has unleashed a lot of hype about the potential of large language models – hype that is not always fully justified. But it is also clear that businesses that make optimum use of what it offers are benefiting from increased efficiency and effectiveness, saving time and money, and generating new sources of revenue.  

It is not always about creating new, generative use cases; existing use cases can just as well be improved. Large language models will have a lasting impact on industrial processes. But if the models are really to become game changers for the industrial sector, businesses need to start exploring the technology and its potential now.  

We are working on new ideas and developing use cases constantly to assist our customers with new technologies. If you have an idea for a specific use case or a question about using AI in your business, feel free to get in touch! 

Philipp Morf
Head AI & Data Practice
Contact person for Switzerland and Germany

Dr. Philipp Morf holds a doctorate in engineering from the Swiss Federal Institute of Technology (ETH) and has headed the Artificial Intelligence (AI) and Machine Learning (ML) Solutions division at Zühlke since 2015. As Director of the AI Solutions Centre, he designs effective AI/ML applications and is a sought-after speaker on AI topics in the area of applications and application trends. With his many years of experience as a consultant in innovation management, he bridges the gap between business, technology and the people who use AI.

Dan Klein
Global Chief of Data & AI
Contact person for United Kingdom

Dan is the Global Chief of Data & AI and has extensive experience working across a diverse range of sectors, including government, transport, telecoms, and manufacturing. As a skilled engineer and strategic advisor, Dan effectively connects the needs of leadership with the technical expertise of teams to successfully drive data transformation initiatives for organisations. He brings a unique combination of strategic thinking and deep knowledge of data and engineering to his consulting work.

Barbara Hotwagner
Managing Director Technology
Contact person for Austria

Barbara Hotwagner is particularly passionate about sustainability and diversity. As Managing Director Technology and a member of management at Zühlke Austria, she is responsible for the company’s strategic orientation. A key part of her role is to coordinate the teams responsible for project implementation. Previously, she held various positions at other IT firms and has more than 20 years of in-depth experience in this sector.

Steve Nunez
Former Head of Data & AI, Asia
Contact person for Singapore

Steve Nunez is Zühlke Asia's former Head of Data & Artificial Intelligence. Beginning his career at the NASA Artificial Intelligence Laboratory at Kennedy Space Centre, Steve has more than 25 years of experience leading AI and data science teams. Prior to Zühlke, Steve led the professional services and technical pre-sales teams for a big data vendor, providing intelligent solutions in the finance, insurance, and government sectors.