Management Side

AI Danger

By Pat Dixon, PE, PMP

President of DPAS, (DPAS-INC.com)

I have recently been considering the potential danger of artificial intelligence (AI). I have been watching interviews of people like Marvin Minsky (co-founder of the MIT AI laboratory), Geoffrey Hinton (the "godfather of AI"), and others. Of course, anyone who has watched any Black Mirror episode would be afraid. Since AI is a significant part of my profession and credentials, I should take these concerns seriously.

In a prior article I mentioned the uncertainty in defining AI. The definition I will use is any computer application that passes the Turing test. Alan Turing was a brilliant mathematician whose many accomplishments include the origin of this definition. Any computer application that produces results indistinguishable from, or better than, human intelligence is artificial intelligence. That is the Turing test. One of those applications is automation. If you can't tell whether a really smart operator or a computer is controlling your process, it qualifies as AI (in my opinion).

If we agree to that definition, what is the danger? The dangers consist of:

  • Unemployment: if computers and machines can replace humans while being cheaper and more productive, then humans won't have a way to earn income.
  • Subservience: if computers become smarter than us, they can make us slaves in the same way we use cows to give us milk and burgers.

These are frightening concerns, and people like Hinton are warning us that we are getting close to a tipping point. They say now is the time to get AI under our control before it gets away from us.

Another prominent figure in AI is Ray Kurzweil, who was the subject of one of my prior articles. Kurzweil calls the tipping point the "singularity." He sees tremendous benefit from AI that will make humans more powerful, and he doesn't seem to share Hinton's concerns.

I am not a futurist. I have read authors like Alvin Toffler who seem to have an impressive way of reading trend lines and predicting where they will go in the future. I am much less confident in my prognostication. However, I do have some opinions.

To begin, I am going to constrain the domain to industrial automation. What is not universally understood outside of manufacturing industries is that the AI techniques that apply to chatbots, finance, and social media are not always the best when applied in industry. Methods such as neural networks can have a place, but neural networks tend to overfit noisy data that has not been pre-processed by humans, and the results can be disastrous when applied to closed-loop control. We have first principles that tell us steady-state models rarely have more than one inflection point, and dynamic models rarely need more than second-order dynamics once lead and deadtime are accounted for. We know what the models should look like. Tried and true methods with first-principles foundations qualify as AI, but they have been time tested and are not scary.
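To illustrate the point about model structure (my sketch, not from the article): below, noisy step-response data from a simple first-order process is fit two ways. A high-degree polynomial stands in for an unconstrained black-box model, while a fixed-structure first-order model, y = K·(1 − exp(−t/τ)), encodes the first-principles knowledge. The process parameters (gain 2, time constant 5) and the fitting details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic step-response data: first-order process, gain K=2, tau=5, plus noise
t = np.linspace(0, 20, 40)
y_true = 2.0 * (1 - np.exp(-t / 5.0))
y_noisy = y_true + rng.normal(0, 0.1, t.size)

# Unconstrained fit: a high-degree polynomial (stand-in for an over-flexible
# black-box model) chases the noise in the training window
poly = np.polynomial.Polynomial.fit(t, y_noisy, deg=12)

# First-principles fit: the structure y = K*(1 - exp(-t/tau)) is fixed by
# process knowledge; search over tau, solving for the gain K at each step
best = None
for tau in np.linspace(0.5, 15.0, 200):
    basis = 1 - np.exp(-t / tau)
    K = np.dot(basis, y_noisy) / np.dot(basis, basis)  # least-squares gain
    err = np.sum((y_noisy - K * basis) ** 2)
    if best is None or err < best[0]:
        best = (err, K, tau)
_, K, tau = best

# Extrapolate beyond the data window, where a controller would actually act:
# the structured model settles near the true gain; the polynomial diverges
t_new = 30.0
y_poly = float(poly(t_new))
y_fopdt = K * (1 - np.exp(-t_new / tau))
print(f"true steady state = 2.00, first-principles = {y_fopdt:.2f}, polynomial = {y_poly:.2f}")
```

The structured model extrapolates close to the true steady state of 2.0 because its form cannot do anything else, while the polynomial's prediction outside the training window is essentially arbitrary. That is the sense in which knowing what the model should look like keeps closed-loop applications safe.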

If we step outside of our industrial domain, we need to ask if there is any attribute of humanity that computers cannot achieve. In "Algorithms to Live By" (Christian and Griffiths), the authors make clear that the AI techniques that fail are those that try to be too perfect; the successful techniques more closely approach our human imperfections. That suggests there is an attribute of humanity that machines can approach only asymptotically. That attribute is creativity.

The way I explain this to students at Miami University is as follows:

  • Premise 1: There will always be problems to solve
    • Corollary 1: There will always be a demand for solutions to problems
  • Premise 2: Some problems require creativity (https://www.businessinsider.com/accidental-inventions-that-changed-the-world-2014-5)
  • Premise 3: Humans are the ideal creative machine
  • Thesis (Corollary 1 + Premise 2 + Premise 3) = Machines will not entirely replace human labor
  • If you want to make the world better, have a baby

Of course I could be wrong, but since that has never happened that seems improbable.

I think we should be cautious about any technology. Just like nuclear weapons and recovery boilers, AI has the potential to be dangerous. If you have competent and enlightened people to work with when you are applying AI, you will be safe and profitable.
