AI Is Here to Stay. What Are Its Promises? What Are the Threats?


Image: OpenAI, Jason E. Kaplan and Joan McGuire
This image was generated by OpenAI’s DALL-E 3 program, using a photograph taken by staff photographer Jason E. Kaplan as a prompt. The original photo appears below.

Oregon Business convened a group of thought leaders in the field to talk about what we know (and don’t know) about artificial intelligence — and what business leaders and policymakers should be thinking about as the tech accelerates.


Artificial intelligence dominated headlines throughout 2023 and into 2024. In November OpenAI CEO Sam Altman was ousted from his role, only to be rehired less than a week later. And just two weeks into the year, more than 5,000 workers in the tech sector had lost their jobs, with tech giants like Google attributing the cuts at least partially to AI, which is already allowing companies to automate jobs previously performed by human beings. 

Policymakers have put an ear to the ground on AI, too. In October President Biden issued an executive order establishing standards for AI safety and security and for protecting individual digital privacy, and in December Gov. Tina Kotek signed an executive order to create an advisory council that will guide the state government’s use of artificial intelligence. As the 2024 election kicked into gear, we saw a glimpse of how the rise of AI could affect the political sphere, with New Hampshire voters receiving robocalls in January in which a convincingly Biden-like voice told them, inaccurately, that if they voted in the primary they wouldn’t be permitted to vote in November’s general election. 

We’ve been keeping our eyes on the rise of artificial intelligence — in September, for example, we reported on GameChanger, an AI-enabled software tool that helps spectators keep score at high school sporting events, then generates a prose story parents can send to grandparents and recruiters. In that story, we noted that AI is already disrupting media, with The Oregonian announcing that it uses generative AI tools for real estate listings. And way back in February 2022, we covered the rise of wearable health tech devices, many of which use predictive AI tools to help users manage their health. 

Skip Newberry, Cass Dykeman and Rebekah Hanley at OB’s AI Roundtable. This image served as a prompt for OpenAI’s DALL-E 3, which generated both the magazine’s cover image and this article’s lead image.

But the tech has accelerated so quickly that it seemed like it was high time to talk about the big picture. And to do that, we called in some experts. In mid-December, I assembled a group of people who work with and study artificial intelligence to talk about where the tech is heading, what they’re concerned about and what makes them feel hopeful. 

What follows is a transcript of that conversation, edited for space and clarity. 

AI Roundtable | Here’s who was in the room:

Cass Dykeman is a professor of counseling at Oregon State University; prior to that, he worked as an elementary and high school counselor in Seattle. His expertise includes the use of corpus linguistics, Bayesian statistics and artificial intelligence in research.
Charles Jennings, founder and former CEO of NeuralEye, an AI company specializing in recognition intelligence for computer vision, is the author of Artificial Intelligence: Rise of the Lightspeed Learners. He is currently board chair of Portland’s Swan Island Networks, a security intelligence company.
K S Venkatraman is senior director for artificial intelligence computing at NVIDIA Corporation and an executive committee member of Oregon’s Workforce Talent Development Board. His teams develop products that enable technologies like self-driving cars, natural language processing and recommendation systems.
Rebekah Hanley, a faculty member at the University of Oregon School of Law, teaches foundational lawyering skills, professional responsibility and advanced legal writing courses. As Oregon Law’s current Galen Scholar in Legal Writing, Professor Hanley is studying generative AI and its implications for law school teaching and the practice of law.
Skip Newberry, president and CEO of the Technology Association of Oregon, co-facilitated the conversation.

Portrait photos by Jason E. Kaplan served as prompts for the DALL-E 3 images accompanying each participant.

Oregon Business: To start with, how do we define AI? I’m asking because it seems useful to make sure we’re all talking about the same thing, and also because, as a cynic, sometimes I see things marketed as AI that just don’t sound like what I understand AI to be. So how do you define it?

Charles Jennings: I define it as the art and science of teaching machines — of any kind — to learn. The difference between AI and everything that’s come before it is that AIs continue to learn and grow, and they feed on data in a completely unique way. And they are growing at a rate that most people don’t understand or comprehend.

K S Venkatraman at Oregon Business’ Tigard office. Photo by Jason E. Kaplan

K S Venkatraman: Artificial intelligence has been around for more than 75 years, starting with the Turing Machine.1 Since then, we’ve had machine learning, which is about learning from patterns in data to predict future behavior. Over the last decade or so, we’ve had this rise of accelerated computing coupled with the availability of large amounts of data. That combination has led to a field of AI called deep learning, which is really mimicking the neural networks in our brain. Now we sit at the cusp of this exponential curve with generative AI, and that has led to computers being superhuman in vision and language domains, with many more things to come. Because of this explosion, every economic sector is scrambling to figure out how to incorporate AI into its business, so there is an enormous need for skill development — or at least for learning how to use these AI models responsibly and building basic foundational skills. 

[1 “Turing machines, first described by Alan Turing in Turing 1936–7, are simple abstract computational devices intended to help investigate the extent and limitations of what can be computed. Turing’s ‘automatic machines’, as he termed them in 1936, were specifically devised for the computing of real numbers. They were first named ‘Turing machines’ by Alonzo Church in a review of Turing’s paper (Church 1937). Today, they are considered to be one of the foundational models of computability and (theoretical) computer science.” Source: Stanford Encyclopedia of Philosophy]

OB: What should business leaders be considering as they’re thinking about, “How do I incorporate AI in what I do?” I realize that so much is going to vary depending on your sector. But what are the big things that people should be thinking about both in terms of the business case and the ethics? 

Jennings: I think 2024 will be a big year for vertical-industry AI. I think virtually every vertical from agriculture to zookeeping is going to have some kind of AI disruption. Two weeks ago, I was talking with the CFO of a big construction company. They’ve got a really sophisticated AI application for bidding on big construction projects. I think he said they had 10 human bidders in their company previously; they’re already down to seven. He said, “We’re never going to get to zero because there’s always the personal relationship, there’s the judgment.” They’re keeping the senior ones, and the junior ones are being eliminated. In terms of what companies should be thinking about, you’ve got to have safety and ethics very high on the list. For anybody who’s looking for guidance, I would recommend the new NIST2 framework. They’re doing a really good job of science and measurement and collaboration with industry. I think you’ll see a lot of good guidance on security and ethics coming from NIST over the next year. 

[2 “The National Institute of Standards and Technology (NIST) is an agency of the United States Department of Commerce whose mission is to promote American innovation and industrial competitiveness. NIST’s activities are organized into physical science laboratory programs that include nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement. From 1901 to 1988, the agency was named the National Bureau of Standards.” Source: Wikipedia]

Cass Dykeman at Oregon Business’ Tigard office. Photo by Jason E. Kaplan

Cass Dykeman: My students ask me, “What can I write using an LLM3 and what can’t I write?” I told my faculty, “I am not going to play AI Police. I’m just not going to do it.” I want them to learn how to use it effectively, so I’ll have them use an LLM to write a term paper, and then I’ll have them critique the paper. Now, the really smart ones use another LLM to critique the paper — which is fine. I’m also trying to work on an architecture that will come up with research questions, gather the data and write an article. Then I’m going to submit it to a journal — completely transparent about what I did — and see what the journal does in terms of reaction.

[3 “Large language models (LLMs) are machine learning models that can comprehend and generate human language text. They work by analyzing massive data sets of language.” Source: Cloudflare]

Rebekah Hanley: It’s a time of shifting norms, and I think there’s a lot of diversity of thought in the university — and K-12 as well — about what’s appropriate in terms of reliance on generative AI for research, for writing, for editing. We’re empowering and encouraging students to ask a lot of questions, to clarify with whichever instructor is overseeing a particular project: What is allowed? What is expected? I do think that there’s a real risk of well-intentioned people getting sideways in terms of academic integrity questions: There’s just a good-faith disagreement about whether what they did is or is not consistent with course policies or university policies relating to independent work product creation and plagiarism. I think in terms of citation, we’re going to see, and probably already are seeing, a lot of disagreements about what’s appropriate, what’s inappropriate, what’s cheating.

OB: I was just thinking about how not that long ago, teachers would have said that using spell-check or grammar check is cheating — because you’re supposed to learn how to spell and you’re supposed to learn the rules of grammar. Now I don’t think anybody would say that. I say to writers, “If you haven’t spell-checked your story before you turn it in, you’re not done with it.” I wonder if the way we think about plagiarism is going to change in the coming years because we have these tools.

Venkatraman: They said the same thing about the calculator. I think the repetitive work that we do is getting increasingly automated, and so that frees us up for some higher-level cognitive thinking and some multistep reasoning, which these models are not capable of today. It’s a good copilot to have.

Dykeman: I’m a reviewer for a number of journals, and I’m getting emails from the journals saying, “Do not use LLMs to review your articles.” Even the artificial intelligence journals are saying this. But we’re going to reach a point very quickly — we’re there, right? — where an LLM can write an academic article and an LLM can review an academic article. So where are we and what are the implications?

Jennings: As a writer, I agree, use the tools to teach people how to write a good prompt and follow a good thread, but we also have to keep that skill of writing. Like in math, we don’t want people not being able to write out a quadratic equation, do we? They have to be able to at least learn the skill and then take advantage of the machine. I think all the AI stuff around us just raises the issue of what does it mean to be human? We’re going to have to discover that in new ways. Part of it is going back to some things like storytelling; that is a uniquely human skill that’s not going away. No jury wants to see an AI spout off about something. And emotion and connection — these other uniquely human qualities that I think are going to become more valuable, more treasured over time.

Venkatraman: At the Oregon Talent Summit4, I was on a panel discussion where they were asking, “What are the skills that will be required in the future?” My emphasis was entirely on these human skills: things like collaborative problem solving, complex problem solving or critical thinking, and empathy and communication. The most important skill, in my view, was the self-critique aspect of things: How do you learn from your mistakes and constantly improve? I think we need to build a foundation in these skills. That’s one thing that we can do, because these are not skills that can be automated.

[4 The 2022 Oregon Talent Summit convened a broad array of thought leaders to discuss the implementation of former Gov. Kate Brown’s Future Ready Oregon package of investments, which included a $200 million investment to foster a diverse, skilled workforce.]

Skip Newberry: Do you think there’s a healthy dose of skepticism that needs to be taught better, or more — whether it’s K through 12 or higher ed — as it relates to understanding some of how these systems work and where they can fall short? I think that otherwise, you end up in a situation where everyone reinforces this kind of common standard over time. You lose some of the opportunities for creativity if people just fall into what’s easy, rather than being able to critique both the underlying tools and what they are producing.

Hanley: I believe that people need foundational knowledge in addition to the skill of using an LLM. They need the knowledge also to critique and improve upon whatever is produced by this automated system. There’s a real risk of sort of homogenization — of losing not just unique humanity but diversity of perspective and voice and thought. We are going to lose a lot of ideas and voices if we rely on AI too heavily.

Charles Jennings at Oregon Business’ Tigard office. Photo by Jason E. Kaplan

Jennings: One of the things I learned working with AIs is that they lie. There’s no moral compunction to tell the truth. They’re like my little dog: They want to please, so they’ll tell you something, even if it’s not quite true. We need to develop this hypercritical look at the information we see. We see it in politics. We see it in society. There’s a lot of misinformation, disinformation and misdirection coming at us, right? How do we filter all that? Well, hopefully we’ll be able to get some help from AIs as well. But humans have to guide that, I think.

Venkatraman: There is this reinforcement learning with human feedback that’s important. These models are too new. They’re wrong, but they’re confidently wrong. You have to tell them that they’re wrong and help improve them. To your point about skepticism, I think we have to be careful about people overestimating AI’s capabilities. There is a lot of hype about AI being conscious and solving all the world’s problems, and it’s just not there yet. There is also the idea that it will be the end of humanity; I think those fears are just overblown.

OB: My concern is not that the AIs are going to get too good but that managers are going to say, “This is good enough” and cut jobs. We’re already seeing that in my industry. Although, to your point about skepticism, there was a big scandal with Sports Illustrated using a bunch of AI-generated articles written by fake writers5; they created fake profiles for writers, but because other journalists caught them, those articles were taken down, and I believe the editor who commissioned them was fired. 

[5 In November, the website Futurism reported on a series of articles on Sports Illustrated’s website that appeared to be AI-written, with fake bylines and AI-generated writer photos. The articles and writer profiles were subsequently deleted and parent company Arena Group fired SI’s CEO in December. In mid-January Arena Group announced that it had lost its license to publish the magazine and laid off most of its staff.]

Venkatraman: I feel lucky, in my generation, to have witnessed the internet revolution and the mobile revolution. This one seems bigger than both of them combined. When you talk about job displacement, if you look at just what happened with the internet and mobile revolutions, in the aggregate they created a lot more jobs than they displaced. I think the generative AI revolution is going to create a lot more jobs, but yes, there will be displacement. A law firm may not need to hire 100 paralegals when it can do with just 10 — but those 10 would be a lot more productive because they know how to use the tools. That’s why it’s so important to teach everybody how to use AI models. One thing I keep harping on is that we really need to make a computer science course — one that covers the foundational basics of AI, or of using AI — part of a core graduation requirement in K through 12. I’d really love to make that happen; I just don’t know how. 

Dykeman: The hunger is out there. When I’m talking to K-through-12 teachers, they want to know how to use it, and they want to be able to turn around and teach it, too. Where I start is the basics of prompt engineering — teaching them chain-of-thought reasoning and how to do it, and how to turn around and train others in those basics. Now, at some point, the LLMs will probably do all that prompt engineering for us. That training needs to happen at the workforce-development level, graduate level, all the way down to kindergarten. 

Jennings: I get asked often by people who say, “Should I have my child study coding in college?” And I say, “Well, maybe there are some other options.” In my book, which was written in 2018, I said, “Don’t let your children grow up to be radiologists, because AIs are going to be much better at image recognition.” You know what? I was completely wrong. Stanford did a great study of the 28 qualities it takes to be a great radiologist. Two of them involve visual recognition of anomalies in a scan. The others are all dealing with other doctors, understanding insurance — they’re all human things, like dealing with patients. No one wants to be told by an AI that they have cancer. But there are other places where I think you’re going to see a massive displacement. It’s really hard to know in advance which ones they are.

Dykeman: There’s a preprint6 that just came out a couple days ago that looked at AI patents and asked how they might impact the job market. You normally hear that coding and those types of things are going to go away, but all of the trades will stay. What they found was that some of the trades are going to prosper, and some of the trades are going to disappear. Across every industry, every spectrum of work, some things are going to be impacted and some things are not. 

[6 Ali Akbar Septiandri, Marios Constantinides and Daniele Quercia, “The Impact of AI Innovations on U.S. Occupations”: “…Our methodology relies on a comprehensive dataset of 19,498 task descriptions and quantifies AI’s impact through analysis of 12,984 AI patents filed with the United States Patent and Trademark Office (USPTO) between 2015 and 2020. Our observations reveal that the impact of AI on occupations defies simplistic categorizations based on task complexity, challenging the conventional belief that the dichotomy between basic and advanced skills alone explains the effects of AI. Instead, the impact is intricately linked to specific skills, whether basic or advanced, associated with particular tasks. For instance, while basic skills like scanning items may be affected, others like cooking may not….” Source: https://arxiv.org/abs/2312.04714]

Venkatraman: With basic coding, I think you’re right that language models can take care of that. But it’s more important to teach students how to think algorithmically to solve problems independent of the syntax of the language. That higher-level cognitive thinking and the problem solving, that’s still important.

Rebekah Hanley at Oregon Business’ Tigard office. Photo by Jason E. Kaplan

Hanley: When it comes to needing to overhaul education, educators at the K-12 level — and in higher ed — are already burned out and exhausted. There is some real overwhelm in thinking about all that the generative-AI revolution indicates. I worry that the project is too big and also constantly shifting. There’s no moment in which we can all take a breath and say, “OK, now I understand, I see the landscape, I understand what I have to do.” Because every day something has changed, and the thinking evolves, and the needs shift. 

Venkatraman: I was talking to somebody from the Department of Education, saying how important it is for us to revamp our curriculum. And the answer I got was, “We do review K-through-12 curriculum, but only once every seven years, and then we usually decide not to do anything about it.” But seven years is a whole generation when it comes to technology advancement. So you have to have a process in place, more than doing a one-off.

OB: What should policymakers be thinking about in terms of workforce, in terms of ethics, in terms of education policy?

Jennings: When I was a fellow at the Atlantic Council, I had a congressman tell me, “AI is just another fad, like CB radio.” And curriculum reform is fast compared to some congressional processes. It’s my hypothesis that Congress cannot regulate AI. It’d be a rowboat chasing a Jet Ski. What we need is a model that we’ve already tested, one that is highlighted in the Oppenheimer film: the Atomic Energy Commission7 as set up by Truman. It’s part of government, but it’s outside of politics, and it’s staffed by people who are A) technical experts and B) absolutely devoid of conflicts of interest. It didn’t just regulate, it researched. The government invested a huge amount in looking at this technology, which was a potential threat to the world but also held great promise for medicine and energy and other things. AI is the same way in my view, and we need a new entity, an AI Commission. I think the red teaming needs to be done completely outside of Big Tech, so that we don’t just leave that incredibly important safety function to the technology people themselves — because if we did, it’d be like asking Big Oil to set our pollution guidelines. 

[7 The Atomic Energy Commission was created in 1946 to manage the development, use, and control of atomic (nuclear) energy for military and civilian applications. The AEC was subsequently abolished by the Energy Reorganization Act of 1974 and succeeded by the Energy Research and Development Administration (now part of the U.S. Department of Energy) and the U.S. Nuclear Regulatory Commission. Source: NRC.gov]

Venkatraman: I think policymakers need to develop frameworks that can look at the fairness of AI systems, you know, particularly around hiring and criminal justice and loan approvals. Because that can lead to problems down the road. The second thing I’d say is transparency in the input to these models, the data that’s being used to train those models. The models are only as good as the data that’s used to train them. 

Hanley: We need money for research, money for training. We need to be thinking not just about risks and managing those risks, but also about capitalizing on the opportunities. I do think opportunities are there to address gaps in education, in justice, in access to information. There are real possibilities we can all enjoy because of these enhancements, and also risks. I’m definitely worried about the bias that’s built into the system being perpetuated, and predictive policing is one area where that really is very scary. 

Venkatraman: Data privacy is the other thing I’m thinking about. Individuals and users need to have control of their data; data is the new oil. They need mechanisms to be able to opt in and opt out of data, and also to delete data that’s attributed to them. 

Jennings: The other thing I think we need to think about is China. China has a very active, aggressive policy on AI. They’ve got a leader who probably understands it better than any current political leader around, and they’re very, very overt about saying they want to be the leader of AI in the world.8 Any policy that we have — either national policy or even standard industry strategy — has to include a look at and understanding of what China is up to with AI.

[8 “The United States leads China in innovative national security technology and industrial might, but Beijing is rushing ahead in areas like artificial intelligence, where it feels it can be a global leader in the next decade, the Center for Strategic and Budgetary Assessment’s latest report concludes. …The report adds that China’s ‘Achilles Heel’ in the competition is its ‘do-it-or-else’ systems of governmental penalties placed on industry, while the United States’ system is more open to market forces in fostering useful innovation. The United States also retains an edge in developing technologies that fit well in the civilian and military sectors, while China’s ‘structural statist bias’ — the report’s term — will likely hinder progress.” Source: USNI News]

Venkatraman: As a blanket statement, I think cybersecurity measures are essential to protect data and prevent unauthorized access to that data, because it can be misused. I just don’t know how. But I think it’s needed.

Newberry: Is there one area that each of you is optimistic about with AI applications in terms of what’s possible?

Hanley: I am optimistic about how this technology, in my discipline, will free up lawyers to focus more of their time and attention on the uniquely human aspects of lawyering: talking to clients; brainstorming creative ideas about how to present a case to a decision-maker, whether that’s a judge, a jury or opposing counsel; how to persuade someone face-to-face, in person; how to frame a question; and how to engage with a human being who’s relying on you in a moment of vulnerability and challenge in their life — and to delegate to technology some of the more tedious, time-consuming tasks that tend to burn lawyers out and make them unhappy with their profession, so they find more satisfaction and extend their reach to more people who need legal support. So I’m optimistic about that — cautiously optimistic.

Dykeman: In my field of education, I’m super excited about the ability of LMS-assisted9 tools to individualize instruction. 

[9 Learning Management Systems (LMS) are platforms with AI algorithms that can “analyze the performance and preferences of individual learners,” tailoring instruction accordingly and enhancing the efficiency and effectiveness of instruction.  Source: Medium.com]

Venkatraman: If I look back at automation, I felt like it helped ease the burden on manual labor. Now with generative AI, it’s helping ease the burden of just repetitive and mundane tasks in white-collar work. My optimism stems from the fact that anything that’s repetitive and mundane in nature, we can use tools to automate that. So that frees us up to focus on things that matter, like human relationships, or solving very complex problems that computers cannot solve today. That’s what keeps me going.

Jennings: I’m very excited about AI’s exploration of space. We have an AI navigating on Mars10 right now in the Perseverance rover. There are really aggressive AI programs from NASA and other private programs. I think, in my grandson’s generation, he’s going to see the universe opened and explored in ways that we can’t even fathom. That’s my Star Trek.

[10 “Members of the public can now help teach an artificial intelligence algorithm to recognize scientific features in images taken by NASA’s Perseverance rover.” Source: nasa.gov/solar-system/you-can-help-train-nasas-rovers-to-better-explore-mars/]

