How AI is Enhancing Human-Driven Decisions

Advances are sparking a revolution in innovation and supercharging humanity’s ability to solve problems once thought to be unsolvable.

When the U.S. Senate recently convened tech giants Mark Zuckerberg, Elon Musk, Bill Gates, and dozens of other industry leaders for a series of private summits on artificial intelligence, Americans witnessed a rare sense of collective urgency on Capitol Hill. AI is evolving at an astonishing pace — so the typically slow legislative process responded with unusually fast bipartisan efforts intent on building guardrails around AI’s use.

AI is transforming virtually every facet of modern life, from banking to medical diagnostics to smart home devices. It’s sparking a revolution in innovation and supercharging humanity’s ability to solve problems once thought to be unsolvable.

At the same time, AI’s potential misuse — not only by bad actors but also by well-intentioned users who haven’t fully vetted the technology — raises complicated questions about privacy, ethics, economics, and national security.

While AI is a powerful technology with an impressive resumé, it is not a replacement for human intelligence and understanding. Humans make nuanced decisions by applying critical thinking and employing complex emotions. Yet by harnessing AI as an innovative tool, humans can ignite their imaginations, inspire new solutions to old problems, and ultimately build a better world.

The Tepper School of Business faculty are putting this belief into practice; they are at the forefront of leveraging AI to help solve complex business problems. Collaborating with colleagues across the university, they focus on productivity and efficiency, human-AI interaction on teams, and what AI’s role in decision-making means for fairness and bias.

In the classroom, Tepper School faculty prepare students for AI-related challenges they’ll face as ethical business leaders in the new world, such as how to hold the reins on a technology that shows no sign of slowing down.

Untapped Data

R. Ravi, the Andris A. Zoltners Professor of Business; Professor of Operations Research and Computer Science; Director of Analytics Strategy

For decades, corporations have invested billions of dollars in computer systems that amassed gigantic amounts of data. Now, with useful AI tools emerging every day — technology that can automate processes or personalize content, for example — companies are starting to comb through that data, gaining new perspectives on their operations, marketing, and finance, observed R. Ravi, the Andris A. Zoltners Professor of Business, Professor of Operations Research and Computer Science, and Director of Analytics Strategy at the Tepper School of Business.

Ravi calls this evolution “Management Science 2.0.” In the Tepper School’s new Center for Intelligent Business, companies like Home Depot and Adobe learn how to get the most productive information out of their data.

“We want to find ways of deploying AI that make human managers and executives more intelligent,” he said.

The center brings together faculty members from different functional areas of business, such as marketing, finance, operations, and communication, said Professor of Economics Laurence Ales. They work on their own research, collaborate, and prepare briefs on popular current topics.

Laurence Ales, Professor of Economics

“Each gives their own perspective on what is happening, what might happen, or what might not happen in this space,” Ales said.

Ravi invites companies to meet with Tepper School faculty and Ph.D. students to discuss the problems executives are hoping to solve, and perhaps identify a portion of the problem for deeper research. The center offers expertise in multiple ways: For example, a student team can tackle the problem as part of a capstone project, or the company might fund a faculty member for a summer month and a Ph.D. student for a year to work on a solution.

Glance, an Indian AI-based software company that personalizes dynamic phone screen content, came to the center for help customizing material for 250 million users. Former Tepper Ph.D. student Su Jia, Ravi, and four other collaborators created an algorithm for recommending and serving Glance’s content to users. Working closely with the company, the team tested the new program on a small percentage of the company’s traffic. The result? Glance saw a healthy lift in all of its most important performance metrics.

In addition, the team’s research was part of a doctoral thesis that won the prestigious 2022 George B. Dantzig Award, given for the best dissertation in any area of operations research and the management sciences that is innovative and relevant to practice.

Thinking Differently

Benjamin Moseley, the Carnegie Bosch Associate Professor of Operations Research, is a mathematician with a background in computer science. He studies how to take the insights gained from machine learning and incorporate them into business decision-making, with a particular focus on speeding up computational methods. One result of this research is a first-of-its-kind, generally applicable way to leverage information from past problem instances to solve future ones, called predictive flow.

Benjamin Moseley, Carnegie Bosch Associate Professor of Operations Research

“What we teach in computer science is that when you get a new problem, you assume it’s the first time you’ve ever solved it. In practice, even in large computational problems, we tend to solve them repeatedly,” he said.

Predictive flow is especially useful when applied to networks. Last-mile package delivery for USPS, Amazon, and FedEx is an extreme example of this type of computational challenge, and one these companies face every day. Moseley and his colleagues built an algorithm that maps out more efficient, more cost-effective delivery routes.
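The core idea — reuse yesterday’s answer instead of solving today’s similar problem from scratch — can be sketched in code. The following is a toy illustration of a warm-started network-flow solve, not Moseley’s actual algorithm: the network, node names, and “predicted” flow are invented, and the solver is the textbook Edmonds–Karp method. Feeding a predicted flow into the residual graph first leaves far fewer augmenting-path iterations to run.

```python
# Toy sketch of the "predict, then repair" idea behind warm-started
# network flow. Not the published predictive-flow algorithm; the graph
# and the predicted solution below are invented for illustration.
from collections import deque

def make_residual(edges):
    """Build a residual-capacity graph (with zero-capacity reverse edges)."""
    residual = {}
    for u, v, c in edges:
        residual.setdefault(u, {})
        residual.setdefault(v, {})
        residual[u][v] = residual[u].get(v, 0) + c
        residual[v].setdefault(u, 0)
    return residual

def edmonds_karp(residual, s, t):
    """Push augmenting paths until none remain; return (flow, iterations)."""
    total, iters = 0, 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v, r in residual[u].items():
                if r > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total, iters
        iters += 1
        path, v = [], t                        # walk parents back to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual[u][v] for u, v in path)
        for u, v in path:                      # push the bottleneck amount
            residual[u][v] -= b
            residual[v][u] += b
        total += b

# Six unit-capacity routes from source "s" through midpoints to sink "t".
edges = [("s", f"m{i}", 1) for i in range(6)] + [(f"m{i}", "t", 1) for i in range(6)]

# Cold start: solve today's instance from scratch.
cold = make_residual(edges)
flow_cold, iters_cold = edmonds_karp(cold, "s", "t")

# Warm start: yesterday's solution already routed one unit through m0..m4.
# Apply that predicted flow to the residual graph, then finish the solve.
warm = make_residual(edges)
predicted = [("s", f"m{i}", 1) for i in range(5)] + [(f"m{i}", "t", 1) for i in range(5)]
for u, v, f in predicted:
    warm[u][v] -= f
    warm[v][u] += f
extra, iters_warm = edmonds_karp(warm, "s", "t")

print(f"cold start: flow {flow_cold} found in {iters_cold} augmenting paths")
print(f"warm start: {extra} more unit found in {iters_warm} augmenting path")
```

The cold solve needs six augmenting paths; the warm solve, starting from the predicted flow, needs only one. On real delivery networks the saved iterations, not the toy numbers, are the point.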

Moseley’s research has been recognized among the top machine learning papers at Neural Information Processing Systems (NeurIPS) conferences. His dream is to develop a meta-method that takes all the past ways of solving these puzzles and automatically scales them to handle large data sets.

“We’re going back to the fundamentals of computer science, rethinking the way we do computation,” he said.

Bias in Lending

Some industries are using machine learning algorithms in place of humans to make important decisions. Not surprisingly, a key question is whether machines can improve upon humans’ decisions.

Yan Huang, Associate Professor of Business Technologies, is helping financial institutions rethink their lending practices. A recipient of an Amazon Research Award for her work on algorithmic fairness, she explores the potential for AI to reduce inherent human bias.

Huang co-authored a paper that uncovered the source, dynamics, and impacts of two kinds of bias in microloan granting. Belief-based bias, as its name suggests, is rooted in beliefs formed from past data: an applicant’s default risk is perceived to be higher (or lower) based on how that applicant’s demographic group has performed on average. Preference-based bias stems from inherent animus toward a group, leading to lower scores for all of its members.

Huang and her co-authors measured the effects of both types of bias in human decisions and showed that eliminating either one improves fairness in resource allocation and boosts profits for the lender. The researchers then trained machine learning algorithms to predict default risk, using both real-world data with human biases encoded and counterfactual data with those biases removed.
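The effect of the two biases can be made concrete with a small simulation. This is a toy sketch, not drawn from Huang’s paper: the two groups, the penalty sizes, and the approval cutoff are all invented for illustration. Both groups have identical true default risk, yet biased “human” decisions produce a large approval-rate gap that vanishes when the biases are switched off.

```python
# Toy simulation (invented numbers, not from Huang's study): how
# belief-based and preference-based bias distort loan approvals.
import random

random.seed(0)

GROUPS = ["A", "B"]                        # hypothetical demographic groups
TRUE_RISK = {"A": 0.20, "B": 0.20}         # identical actual default risk
BELIEF_PENALTY = {"A": 0.00, "B": 0.10}    # group B judged riskier from past averages
PREFERENCE_PENALTY = {"A": 0.00, "B": 0.05}  # flat animus toward group B

def human_decision(group, belief_bias, preference_bias):
    """Approve the loan if the (possibly biased) perceived risk is low enough."""
    perceived = TRUE_RISK[group]
    if belief_bias:
        perceived += BELIEF_PENALTY[group]
    if preference_bias:
        perceived += PREFERENCE_PENALTY[group]
    # Noisy human judgment: approve when perceived risk falls under a cutoff.
    return perceived + random.gauss(0, 0.05) < 0.30

def approval_rates(belief_bias, preference_bias, n=20000):
    """Simulate n applicants per group and return each group's approval rate."""
    return {
        g: sum(human_decision(g, belief_bias, preference_bias) for _ in range(n)) / n
        for g in GROUPS
    }

biased = approval_rates(belief_bias=True, preference_bias=True)
debiased = approval_rates(belief_bias=False, preference_bias=False)
gap_biased = abs(biased["A"] - biased["B"])
gap_debiased = abs(debiased["A"] - debiased["B"])
print(f"approval-rate gap with both biases:   {gap_biased:.3f}")
print(f"approval-rate gap with biases removed: {gap_debiased:.3f}")
```

Decisions like the biased ones above are exactly what ends up encoded in historical lending data, which is why an algorithm trained on such labels inherits the gap, and why training on counterfactual, debiased labels shrinks it.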

Yan Huang, Associate Professor of Business Technologies

The research revealed that even fairness-unaware algorithms, which inherit some bias from their training data, can reduce the bias in human loan-granting decisions. Removing both types of human bias from the training data improved the algorithms’ fairness further.

Huang said that concerns about algorithmic bias can make executives question the value of using AI at all. However, even biased algorithms may improve decision fairness compared with the status quo in certain contexts. Because AI applications are still in their infancy, she calls for a balanced view of the risks and benefits — and continued testing to get it right.

“I’m not saying that if the algorithm is doing a little better than the status quo we should be satisfied with that. We still want to continue with a lot of technical and social science research,” she said. “To ensure fairness, we can also have, for example, robust testing of the validation process before applying the system on a large scale, and have a development and testing team of diverse individuals.”

The Next Generation

Derek Leben, Associate Teaching Professor of Business Ethics

In classwork, research, and real-world projects, Tepper School students are preparing for scenarios we can’t yet imagine as AI technology rapidly evolves. Some will dive into technical product development, with a hand in decisions that could affect millions of people. Others will apply AI in operations, marketing, and finance roles, or even help guide its development.

Derek Leben, Associate Teaching Professor of Business Ethics and a classically trained ethicist, explores the ethical issues of products and services. In the course “Ethics and AI,” MBA and MSBA students think about policies companies need to create around the use of these products.

They ask important questions: Which contexts for facial recognition are acceptable, and which cross an ethical line? What risk-reducing limits should be put on where autonomous vehicles are deployed? Which kinds of data are simply off limits, even with patient or consumer consent?

“It’s always important to ask, ‘Where are the red lines that we will not cross and don’t think other companies should cross as well?’” Leben said.

With a syllabus that Leben says “changes every year because the field moves so fast,” the class studies real-life scenarios that show how a policy alone isn’t enough. When companies publicly declare a given policy, they need to seriously consider the criticism that follows, because it inevitably will. The goal isn’t a marketing win, he said: “It’s from the perspective of ethics, that what we’re doing is genuinely careful, respectful, and well-thought-out.”

When IBM rolled out its supercomputer Watson as a diagnostic tool for medical professionals in the 2010s, company executives overestimated its capabilities. Leben’s class studied the case as a lesson in hubris.

“[IBM was] accused of safety and liability issues, and justifiably so,” Leben said. “These automated systems make high-risk decisions about people’s health, jobs, and loans — even, in the case of criminal justice, about whether somebody is going to get bail or parole. These are the kinds of products and services my course focuses on.”

The course covers a lot of thorny territory: privacy and consent, data ownership, explainability, discrimination, fairness, safety, and liability.

“There’s a long history of ethics and law around these products and services. What we talk about in the class is how these old ideas get transformed into new applications,” Leben said.

Human-AI Interaction

Anita Williams Woolley, Associate Dean, Research; Professor of Organizational Behavior and Theory

The teams building these products and services increasingly use AI to collaborate in a more integrated way, going beyond simple communication and scheduling applications, said Anita Williams Woolley, Professor of Organizational Behavior and Theory and Associate Dean of Research.

A social psychologist, Woolley works with colleagues in computer science and robotics to find the right balance of interaction between humans and AI agents. They research what signals AI can use to capture the quality of team collaboration, and how AI can be a good collaborator and increase human-machine collective intelligence.

Right now, many are worried about the ways technology might take over their jobs. “In reality, the most powerful opportunities to leverage AI involve finding synergies with human intelligence. But that can be tricky, if people are worried about their privacy, or about getting fired,” Woolley said. “However, we have the opportunity to rethink how work is structured and to use AI to build in the types of task variety, autonomy, and feedback that fuel human motivation. As we do so, we can more easily find ways to leverage the technology to enhance the efficiency and quality of work products.”

“Everybody’s all whipped into a frenzy about ChatGPT and large language models and so on, which are really exciting. But there is still an enormous role for human-machine collaboration; we still have a lot to learn about artificial social intelligence, which will be an essential hurdle to clear before machines can really work autonomously,” she said.

The Future of Work

Ales, an expert on the impact of technology on skills demand, studies the future of work and how AI might shape it. In a course called “Technology and the Future of Work,” Ales tells students that knowing how technology is changing the way work gets done will be an essential tool for creating business processes. Hence, his students work with data on skills and abilities from the Department of Labor and other sources. This data can reveal insights into the impact of technology, not only on workers but also on products and processes.

In a study last year, Ales and engineering colleagues compared automated and non-automated semiconductor factories to see whether technology changes affect the demand for skills in an occupation. They collected an enormous amount of data on the skill, training, education, and experience requirements of every step in a manufacturing process. Ales’ paper is one of the first to directly map different technological changes onto labor outcomes using an engineering process model.

“We teach students to be very precise on statements that involve technology and give a basic framework to think about it in a simple way,” he said. “You can have a glimpse into how production processes can be transformed in a more complex way and thus how technologies can augment or substitute workers.”

The Future Unknown

Growth is the most powerful economic force, and a big driver of that growth is technology, Ales said. With applications in all areas of business, AI is clearly poised to play a central role.

“Technological changes are happening at a faster rate. If your father was an MBA student and your grandfather was an MBA student, yes, technology was important for them, but changes were happening much slower. Now things are happening much faster,” he said.

So how do we keep up? Ales said the question was in the air at an economics conference he recently attended in Boston.

“One of the things that was floating around was, maybe there should be an intentional slowing down from policymakers,” he said. “And I will add, business leaders also have a responsibility in this before deploying something simply because it can be done.”

Ravi said the moment we’re in now, with AI technology, looks like the decades before widespread electrification. We’re reimagining traditional management science problems and how to use AI to solve them.

“We are in that phase now with AI. People have figured out what this can do. But we really haven’t figured out how to fit it into all the things that we do in real life,” he said. “That’s the huge transformative potential of AI, and that’s what we’re reimagining here.”
