
"It would be a disservice not to train my students how to use AI"

27 March 2025
Venkat Venkatasubramanian

Mr Venkat Venkatasubramanian, Professor of Chemical Engineering at Columbia University and a member of the National Academy of Engineering, recently visited BME and gave a talk about the use of artificial intelligence in chemical engineering. After the talk, he spoke to bme.hu about his research area.

What brings you to Hungary and BME?

I came to Hungary to attend a commencement at the Széchenyi István University of Győr. The university awarded me an honorary doctorate, so I had the pleasure of attending the commencement exercises there three days ago. My wife had not seen Budapest before, but I had, and I loved the place. So, I told her, let's spend a couple of days in Budapest before heading back to New York. Once our plans for Budapest were fixed, I reached out to Mr Botond Szilágyi to let him know that I was visiting, and he asked me whether I could give a talk. I was happy to give a talk on AI, which is my research area.

In which specific areas of the chemical industry is AI currently being successfully applied?

As I mentioned in my lecture, the use of AI in chemical engineering has been going on for about 40 years, somewhat quietly. In the last five years, it has received much more publicity everywhere, not just in chemical engineering. So, machine learning, the currently popular version of AI, is used in chemical industries everywhere: design, control, optimization, and process safety. I myself work on AI for pharmaceutical manufacturing, so there's a lot of work going on there in pharmaceutical development. And these are just the early stages; we are still figuring out where it works, because it doesn't work uniformly well everywhere. In the next five to ten years, it's going to be very exciting.

Venkat Venkatasubramanian earned his Ph.D. in chemical engineering at Cornell, an MSc in physics at Vanderbilt, and a BSc in chemical engineering at the University of Madras, India. He taught at Purdue University for 23 years before joining Columbia University in 2011, where he directs the Complex Resilient Intelligent Systems Laboratory. He is a complex-dynamical-systems theorist interested in developing mathematical models of the structure, function, and behavior of such systems from fundamental conceptual principles. His research interests range from AI to systems engineering to theoretical physics to economics, but they are generally focused on understanding complexity and emergent behavior in different domains.

Do you already see what can be the main obstacles to the widespread adoption of AI in the industry?

Potentially, there are three areas of challenges. One is the current version of large language models. In certain non-scientific and non-engineering domains, they are doing reasonably well. But in deeply scientific, deeply engineering domains, the gaps in their knowledge show up. Another problem is the hype: overpromising what these systems can do in industry. I try to let people know it cannot do everything right yet, so they shouldn't believe all the hype. I've already lived through two hype cycles: expert systems in the '80s and neural networks in the '90s both overpromised and underdelivered. A similar thing is happening now. And the third one is that we don't have enough chemical engineers doing AI yet, so there's a certain amount of skill development that is needed. And that's going to take some time.

Venkat Venkatasubramanian

And how reliable is machine learning in terms of chemical product development?

It depends on the actual application and the quality of the data. Where you have lots of reliable data, these models tend to do better. The problem is that while machine learning software can learn chess by playing millions of games and failing most of the time, in most cases that is not an option in chemical engineering.

To control chemical plants, I cannot blow up a plant a million times and learn how to control it better.

In fact, I've spoken to my friends in the industry, and they will not let me blow up the plant even once! So, we have to come up with other ways of doing it. That's where hybrid AI comes in: you must marry first-principles knowledge, using symbolic AI techniques, with machine learning. That's what my group is developing.
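A minimal sketch of this hybrid idea, assuming an Arrhenius rate law as the first-principles backbone and a ridge regression that learns only the residual between the model and (here, synthetic) plant data; every name, number, and parameter below is illustrative, not taken from Professor Venkatasubramanian's work:

import numpy as np
from sklearn.linear_model import Ridge

# First-principles part: Arrhenius rate law, k = A * exp(-Ea / (R * T)).
def arrhenius_rate(T, A=1e7, Ea=50_000.0, R=8.314):
    return A * np.exp(-Ea / (R * T))

# Synthetic "plant data": the observed rate deviates from the ideal model.
rng = np.random.default_rng(0)
T = rng.uniform(300.0, 400.0, size=200)  # temperatures in kelvin
true_rate = arrhenius_rate(T) * (1 + 0.1 * np.sin(T / 20.0))
measured = true_rate + rng.normal(0.0, 0.01 * true_rate.mean(), size=T.size)

# Hybrid model: physics supplies the backbone, ML learns only the residual.
residual = measured - arrhenius_rate(T)
correction = Ridge(alpha=1.0).fit(T.reshape(-1, 1), residual)

def hybrid_rate(T_new):
    T_new = np.atleast_1d(T_new).astype(float)
    return arrhenius_rate(T_new) + correction.predict(T_new.reshape(-1, 1))

print(hybrid_rate(350.0))

Because the learned component only has to capture the deviation from the physics, it needs far less data than learning the full input-output behavior from scratch, which is exactly the constraint of not being able to run a plant to failure a million times.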

What about the black box problem in an environment where transparency is highly important?

That goes back to the same hybrid nature. Right now, AI systems cannot explain their answers, so there is no transparency; explainable AI has long been an important challenge. One way to address this is to put in more knowledge, and that's where symbolic AI techniques come in. We need that, particularly for engineering applications.

Are there examples of AI helping reduce the environmental impact of the industry?

People are working on sustainability applications. The AI systems themselves, like LLMs, are not environmentally friendly because they consume so much energy. However, people are deploying AI systems to address sustainability concerns, and eventually, that will help.

And how is AI changing the role of engineers? Can you imagine it replacing human developers at some point?

In the next ten years, I think we'll still need humans for supervision, but they may not have to do too much of the drudgery work. So, it might reduce the number of human engineers or operators needed by a factor of two. Beyond that, it's hard to make any predictions.

How do you think we should incorporate AI into engineering education?

I've been teaching AI to chemical engineers for about forty years. Until last year, every time I taught my course, I gave homework assignments, which included big programming assignments. There wasn't any system that could have helped my students write those programs; they had to learn to do it themselves. But last year, that changed because ChatGPT became available, and it is actually very good at helping with programming. So, I had a dilemma: do I tell my students they cannot use ChatGPT or not? I realized that when they graduate, the companies that hire them won't care how they solve the problem as long as they solve it correctly, quickly, and cheaply.

Use ChatGPT or talk to your grandmother – they don't care; they need solutions.

So, I decided it is not productive to ban them from using ChatGPT; it would be a disservice not to train them how to use it. I told them, "You can use it to do my homework assignments, except I want to see the entire dialogue." I want to know what kind of questions they ask, what answers ChatGPT gives, which answers are right, which are wrong, and how the students adapt by changing their prompts. That has been really helpful for me in understanding how students interact with it and what it is able to do. Right now, that's how I'm using these tools, but five years from now it might be something very different because everything is changing.

What did you actually learn from those dialogues you got to read?

The main thing I learned was that the smart students, the ones who did well in my course, understood what the question was about, what I was asking them to solve, and what ChatGPT could do. They wrote smarter prompts, which basically guided the system better. The other ones were stumped; you could tell that from the dialogue. It was like having an interview.

Venkat Venkatasubramanian giving his talk

Do you think artificial intelligence will radically change the process of pharma product development?

Yes, it will happen in ten years, maybe five. More or less, every industry will be revolutionized; this one is no exception.

Are we prepared for that to happen?

No, we, as individuals, are not prepared. We, as a society, are definitely not prepared. Our politicians are probably the most ill-prepared, and our legal system is not prepared either. We will need new laws.

What are the main ethical and safety issues?

It's more or less well-known that our privacy is going to be abused by many parties, including the government. That's why we need new regulations. My postdoc advisor, Geoffrey Hinton, who won the Nobel Prize for his AI work in December, says 

we must treat these AI systems like biological viruses.

There are many biohazard regulations about who can use them and how to contain them. He compares the threat to nuclear and biological weapons. I don't necessarily agree with all the details of what he's saying, but I support the general concern and the idea that people should pay attention.

As an expert in your field, have you ever been asked by politicians or regulators for advice?

Those inquiries are beginning to happen. I'm hesitant to participate because I often don't know what the answers are. So, I'm trying to get some clarity.

You can still give good hints.

Yes, I can give hints. But I won't be saying anything new compared to what Geoff Hinton and his colleagues are warning about. When I'm asked, I usually point to them: go listen to them; they've been talking about it for years.

pg

 

Before the interview, Professor Venkatasubramanian met BME Rector Hassan Charaf in his office for a conversation, accompanied by Mr Botond Szilágyi, research fellow at the Department of Chemical and Environmental Process Engineering:

Venkat Venkatasubramanian, Botond Szilágyi, Hassan Charaf