September 27, 2021 - 5 minute read

Blaine Nelson: Using His Adversarial Machine Learning Research to Improve RIME

People
“One of the beauties of machine learning is that you’re not necessarily reaching a true optimum, but rather, with every step you’re improving on what you had in the past. It’s constantly pushing you to think harder and be a better problem solver, and that’s what I like.”

For this week’s fireside chat, we sat down with our Principal Machine Learning Engineer, Blaine Nelson. Blaine earned his BS in Computer Science from the University of South Carolina in 2003, and his MS and PhD from the University of California, Berkeley in 2005 and 2010, respectively. At Berkeley, Blaine focused on research in adversarial machine learning, and went on to postdoctoral research at the Universities of Tübingen and Potsdam.

After his postdoc work, Blaine worked for six years at Google, building machine learning infrastructure and technologies for counter-abuse. We were lucky enough to add him to the Robust Intelligence team in April 2021.


What made you interested in machine learning/artificial intelligence?

People who are passionate about their field of study are natural-born storytellers, and it was captivating to listen to Blaine talk about what drew him to machine learning.

“I think what made me most interested in AI was the fact that there are problems that kind of go outside of what can be solved purely algorithmically – problems which you don’t necessarily need the correct solution to, but you need to have a reasonable solution, which can be approximately good enough – this is the space in which AI/machine learning kind of falls,” Blaine told us. 

For instance, if you want to find the shortest path between two cities, that’s an algorithmic problem, and something we know how to solve efficiently. However, there is a whole range of problems that simply cannot be solved efficiently – or at least, we don’t have a way of doing so at this time.
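
To make the contrast concrete, here is a minimal sketch (ours, not from the interview) of the tractable case Blaine mentions: Dijkstra’s algorithm finding a shortest path between cities. The graph and distances are made up for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: minimum total distance from start to goal.

    graph maps each city to a list of (neighbor, distance) pairs.
    """
    distances = {start: 0}
    queue = [(0, start)]  # (distance so far, city)
    while queue:
        dist, city = heapq.heappop(queue)
        if city == goal:
            return dist
        if dist > distances.get(city, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for neighbor, edge in graph[city]:
            new_dist = dist + edge
            if new_dist < distances.get(neighbor, float("inf")):
                distances[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return float("inf")  # goal unreachable

# Hypothetical road network; the distances are invented.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(shortest_path(roads, "A", "D"))  # 4, via A -> C -> B -> D
```

This runs in polynomial time with a provably optimal answer; the problems Blaine is describing are precisely the ones where no such guarantee is available.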

“There’s a lot of alchemy in machine learning and artificial intelligence, in the sense that there are scientific methods that go into the machinery of learning, but in many cases it’s not really fully understood how good your final solution is relative to your goal.”


“That’s kind of what got me interested in the whole machine learning space,” Blaine explained. “The challenge of creating a machine that learns how to solve a problem without being given all the steps, so that it can either approximate or iteratively solve in a way better than how we already can.”

How did your background prepare you for working at Robust Intelligence?

Blaine is one of the first researchers in the world to work on adversarial machine learning, and has written a book about it. Of course, this never came up directly in conversation. As a true expert, Blaine talked more about the material than about his own accomplishments, impressive as they may be – he spoke so fluidly and easily about his field that, while listening, we almost felt as if we ourselves had been pioneers in the cybersecurity arms race.

There were many small things along the way that shaped Blaine’s passion for the topic. The first time he encountered machine learning was on a summer internship project at Duke in 2002. For the project, Blaine and his fellow students worked on landmine detection, using various inference techniques on electrical signals to determine whether a landmine was present.

It was one of his first experiences weighing the practicality of using neural networks to solve a problem. “It didn’t work very well,” he said, laughing. “But the neural network was fun to play with, so I started getting obsessed with machine learning around then.”

A pivotal point for Blaine came in his second year of grad school. “I had been working on a machine learning project with a couple of colleagues, which was concerned with virus detection. To be honest, it was a very naive project, using basic statistics from email to detect whether an email had a virus or not – it was pretty clear that you could break the thing pretty easily.”
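
The interview doesn’t give the project’s details, but a detector built on “basic statistics from email” might look something like the sketch below. Every feature and threshold here is a hypothetical illustration, not a rule from the actual project; the point is how trivially an adversary evades such rules.

```python
def looks_like_virus(email_body: str, attachment_names: list) -> bool:
    """Toy detector in the spirit of 'basic statistics from email'.

    All features and thresholds are hypothetical illustrations.
    """
    suspicious_extensions = (".exe", ".scr", ".pif")
    has_bad_attachment = any(
        name.lower().endswith(suspicious_extensions) for name in attachment_names
    )
    # Crude statistic: a very short body plus any attachment is suspicious.
    too_terse = len(email_body.split()) < 5 and len(attachment_names) > 0
    return has_bad_attachment or too_terse

# An adversary evades both rules by renaming the attachment and padding the body.
print(looks_like_virus("check this out", ["invoice.exe"]))  # True
print(looks_like_virus("hi, here is the report you asked for after last week's meeting",
                       ["invoice.zip"]))                     # False
```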

At the time, Blaine was also taking a class in computer security and machine learning. One of his peers had this idea to do a project that spanned both areas. “Basically, the idea was to imagine, and come up with a framework for determining, how secure a machine learning algorithm is.”  

Blaine wrote the project up with his colleagues, and it eventually became a paper entitled “Can Machine Learning Be Secure?”, published in 2006 with Marco Barreno, Russell Sears, Anthony Joseph, and J.D. Tygar. The concepts at the heart of that paper became the core of Blaine’s research for the rest of grad school and his doctoral work.

For readers familiar with Robust Intelligence, this paper – and Blaine’s subsequent research – clearly foreshadows the project of RI. At the time of Blaine’s studies, there was little academic research on security in machine learning, and Blaine and his colleagues helped lay the foundations for the entire field.

Today, securing machine learning against operational error and adversarial attack is the mainstay of Robust Intelligence. We are very fortunate to have Blaine as one of our leads and project managers; as one of the world’s foremost researchers, there’s no better person to help RI lead the AI/ML cybersecurity revolution.


What does a day in the life of an engineer at Robust Intelligence look like?

Although Blaine’s official job title is Principal Machine Learning Engineer, he’s currently working on backend engineering. Right now, his role mostly entails ensuring that the company can deliver machine learning technology to its customers within a production environment. In other words, Blaine’s background in cutting-edge research is now being connected to its application in a commercial context, helping Robust Intelligence’s customers secure their AI/ML pipelines against operational risk.

“It’s a little difficult to give a typical day, because the days tend to diverge – in a good way,” Blaine said. “My mornings are spent collecting thoughts, figuring out what we’re doing next. Then I spend around half the day trying to write up various improvements, whether those be coding, documents, et cetera.”

The rest of Blaine’s day is spent in meetings, just talking through things with other engineers and team members at Robust Intelligence. As a high-growth startup, much of what happens at RI on a day-to-day basis is taking stock and planning for the next major project or company development. It’s an exciting atmosphere of change and acceleration, and Blaine helps to lead that momentum for the machine learning and backend teams.


What do you find the most exciting about working at Robust Intelligence? 

Blaine is a problem-solver at heart – a devout New York Times puzzle enthusiast and an intrepid hobbyist lockpicker. When we asked Blaine what he finds the most satisfying about his work at Robust Intelligence, he spoke about the simple joy he gets from just sitting down and having an algorithmic problem to solve. 

“That’s always been my biggest joy – how do I take a problem and find a way to solve it using the tools that I’ve learned over the last two decades.”

“To me, there’s nothing more enjoyable than trying to take a general problem in the world and map it into a sequence of steps that can be solved efficiently, or at least solved in a reasonable way by a machine.”


Why did you decide to join Robust Intelligence? What brought you to the job?

Blaine is one of the most successful pioneering researchers in the adversarial machine learning space, and his academic pedigree speaks for itself – but he’s also interested in shaping the future of machine learning within the business world.

When we asked Blaine what drew him to Robust Intelligence specifically, he spoke about how the company addresses what he calls an “overfitting” problem in today’s machine learning world.

As Blaine explained it, machine learning today is very effective at handling a lot of data. However, in his opinion, one of the flaws of the field is that a lot of machine learning operates through a sort of “alchemy.” Give an ML algorithm data, and it will get better – but it’s hard to determine whether the system got authentically “better” as we expected it to, or whether it merely improved relative to a previous state. Worse yet, it’s often difficult or impossible to understand what the model actually learned or where it has potential vulnerabilities.
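
Blaine uses “overfitting” partly as a metaphor for the field, but the literal version of the check he describes is standard practice: score the model on data it never trained on. This minimal sketch uses scikit-learn, which the interview doesn’t mention; the dataset is synthetic and purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize its training data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Training accuracy alone says "better" even when the model merely memorized;
# the held-out score is the honest measure of whether it authentically improved.
print("train accuracy:   ", model.score(X_train, y_train))  # typically ~1.0
print("held-out accuracy:", model.score(X_test, y_test))    # noticeably lower
```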

“This is the kind of thing that I think has been missing in the entire area of machine learning, and needs to be improved. Having the correction of this ‘overfitting’ as the focus, as opposed to a secondary goal, is what makes Robust Intelligence special.”

“We’re over-reliant on machine learning technologies as a type of magic, often without considering their limitations,” Blaine explained. As a company, Robust Intelligence seeks to address the overfitting issue and deconstruct some of the mysticism that underlies the black box of ML, tackling issues related to data insecurity and operational error.

To Blaine, one of the biggest draws of Robust Intelligence is helping people, companies, and systems identify where they have problems in their machine learning. He’s a puzzle-solver – and Robust Intelligence allows him to bring his love for puzzles into the work world, disentangling the errors in the AI/ML pipelines of everyone from small startups to Fortune 500 giants.
