The University of Massachusetts Amherst

Faculty Profile: Q&A with Bruno Castro da Silva

For Bruno Castro da Silva, joining the College of Information and Computer Sciences (CICS) as an assistant professor this year was a homecoming of sorts. Castro da Silva received his doctorate in computer science from UMass Amherst in 2014, studying under Professor Andrew Barto. After graduating from UMass, he completed a postdoctoral appointment at MIT’s Aerospace Controls Laboratory and served as an associate professor at the Federal University of Rio Grande do Sul in Brazil. He is pleased to return to UMass to co-direct the Autonomous Learning Lab, where reinforcement learning was invented 30 years ago by Barto and CICS alumnus Richard Sutton ’84PhD.


What drew you to CICS?

My main research area is called reinforcement learning, and I am thrilled to be able to co-direct the lab where all of this started. Besides that, the college has an amazing group of world-class professors doing research that I find very interesting, and there are plenty of opportunities to collaborate on what I think are very important problems, especially scaling up AI so that it can help us with real-world problems.


What are some examples of practical applications of reinforcement learning?

So, reinforcement learning is a branch of AI focused on learning to make decisions based on the interactions of a robot or a system with its environment, in order to solve some given problem. These learning algorithms have been deployed in a wide range of really interesting, real-life problems: creating personalized recommendations in digital marketing, helping with household chores, achieving superhuman performance in video games, deepening our understanding of how our brains work, and delivering packages via drones.
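The interaction loop he describes, in which an agent acts in an environment, observes rewards, and gradually improves its decisions, can be sketched with a tiny tabular Q-learning example. The toy chain environment and all of the names below are illustrative, not taken from any particular library or from his work.

```python
import random

# Illustrative sketch of the reinforcement learning loop described above:
# an agent interacts with its environment, receives rewards, and improves
# its decisions over time. The environment is a hypothetical 5-state chain.

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 solves the task
ACTIONS = [-1, +1]    # move left or right along the chain

def step(state, action):
    """Apply an action; reward 1.0 only when the goal state is reached."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Tabular Q-learning: estimate the value of each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the agent prefers moving right (toward the goal).
print(q[(0, +1)] > q[(0, -1)])
```

Nothing here is specific to the chain: the same loop applies whenever an agent can try actions, observe the resulting state and reward, and update its estimates.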


What work have you done recently in the field?

One of the problems that I'm interested in is designing machine learning algorithms that behave safely. And safety here means more than just avoiding physical harm to humans: a machine learning algorithm is safe, for example, if it does not produce undesirable behaviors, like racist or sexist behaviors. In collaboration with Philip Thomas, I co-authored a paper published in Science in 2019, where we introduced a general way of designing machine learning algorithms that do not produce undesirable behavior. For example, we showed how to construct a model that predicts student aptitude and that, with high probability, does not discriminate against minority groups.
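The general idea of a high-probability guarantee can be loosely sketched as follows. This is an illustration of the concept only, not the paper's actual algorithm: the unfairness scores, threshold, and function names are all hypothetical, and a simple Hoeffding bound stands in for whatever confidence bound a real method would use.

```python
import math
import random

# Loose sketch: a trained candidate model is returned only if a held-out
# "safety test" gives high confidence that a measure of undesirable
# behavior stays below a chosen limit; otherwise no model is returned.

def hoeffding_upper_bound(samples, delta):
    """Upper confidence bound on the mean of [0, 1]-valued samples,
    holding with probability at least 1 - delta (Hoeffding's inequality)."""
    n = len(samples)
    mean = sum(samples) / n
    return mean + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def safety_test(unfairness_samples, threshold, delta=0.05):
    """Accept the model only if, with probability at least 1 - delta,
    its expected unfairness is below the threshold."""
    return hoeffding_upper_bound(unfairness_samples, delta) <= threshold

# Hypothetical per-example unfairness scores in [0, 1], measured on
# held-out safety data for some trained candidate model.
random.seed(0)
samples = [random.uniform(0.0, 0.2) for _ in range(1000)]

if safety_test(samples, threshold=0.3):
    print("model passes the safety test; deploy it")
else:
    print("no solution found; refuse to return an unsafe model")
```

The key design choice is that the burden of proof is inverted: instead of deploying a model unless someone shows it misbehaves, the algorithm refuses to return any model it cannot certify with high confidence.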


Why is this kind of research important?

Safety in algorithms is a super relevant topic, in particular as machine learning algorithms are increasingly used in real-life applications to make predictions. And these algorithms are not always used correctly. For example, there have been algorithms that incorrectly indicated that African Americans are twice as likely as white people to commit crimes in the future. So these concerns about safety are not something that might happen hypothetically in the future; these problems are already happening.


What do you like doing when you are not working?

I'm a big fan of horror movies, and I love to play the guitar. I also really enjoy the outdoors, especially here, because of the natural beauty of the Valley, which I know well because, as I said, I was a grad student here. I'm looking forward to the fall semester, when the university is going to be operating normally and I'll get the chance to once again enjoy this charming region and the nearby towns.