December 8, 2021 - 3 minute read

Announcing Robust Intelligence's $30M Series B

Perspectives

Today we're excited to announce an important milestone for Robust Intelligence: our Series B financing round. This fundraise signals a significant step forward, completing the first chapter in our company's story and heralding the beginning of an exciting new phase.

Since founding Robust Intelligence, we've been building the AI Firewall: software that wraps around an AI model to protect it from making mistakes. Until recently, the AI Firewall was considered an impossible AI problem, and even if that problem could be solved, most of the engineers we interviewed doubted that a startup could build the underlying platform.

To make the seemingly impossible possible, we dedicate the vast majority of our waking hours to figuring out how it can be done. We develop engineering and product strategies that reduce challenging puzzles to incremental problems, which we then address one by one with executable solutions. Now, as one chapter ends and another begins, we have the opportunity to pause for a moment and reflect on why we do it.

We live in a world where organizations adopt AI at an exponential rate and rely on AI models to make critical decisions. Alongside the enormous benefits of using AI, these models frequently fail. As a result, AI introduces significant risks to organizations and, importantly, to the people affected by the output of these AI technologies. We're building the AI Firewall to help organizations eliminate the risks associated with developing and deploying AI models throughout their business processes.

One striking example of AI risk we've seen firsthand involves models used to identify fraudulent transactions. We've seen a class of models used by vendors that produce entirely different results depending on whether a single alphabetical feature in the input data is capitalized. Seemingly small mistakes like this can create significant economic losses for financial institutions and expose their customers to serious risk. These failures were not caused by intentional corruption of the data, but by data collected from multiple sources in slightly different formats. Even a change as subtle as capitalizing a single feature can produce unexpected model outputs and risks for organizations implementing AI, as sketched below.
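
To make the failure mode concrete, here is a minimal sketch in Python. The feature names, toy data, and model are illustrative assumptions, not a customer's actual system; the point is that a one-hot encoder configured to ignore unknown values silently absorbs a capitalization mismatch instead of flagging it, so the same transaction can be scored differently.

```python
# Minimal sketch (illustrative feature names and toy data, not a real customer
# model): a fraud classifier trained on lowercase category labels quietly
# mis-scores the same transaction when an upstream source capitalizes it.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy training data in which fraud is associated with the merchant category.
train = pd.DataFrame({
    "merchant_category": ["grocery", "grocery", "electronics", "electronics"],
    "amount": [35.0, 40.0, 35.0, 40.0],
    "is_fraud": [0, 0, 1, 1],
})

# handle_unknown="ignore" encodes an unrecognized spelling as all zeros, so a
# capitalization mismatch is silently absorbed rather than raising an error.
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["merchant_category"]),
    remainder="passthrough",
)
model = make_pipeline(preprocess, LogisticRegression(max_iter=1000))
model.fit(train[["merchant_category", "amount"]], train["is_fraud"])

# The same transaction arrives from two data sources with different formatting.
lower = pd.DataFrame({"merchant_category": ["grocery"], "amount": [35.0]})
upper = pd.DataFrame({"merchant_category": ["Grocery"], "amount": [35.0]})
print(model.predict_proba(lower)[0, 1])  # low fraud score for the known spelling
print(model.predict_proba(upper)[0, 1])  # a different, higher score for "Grocery"
```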

Lending institutions and insurance agencies that rely on AI models based on statistical learning also need to protect against AI risk. The data used to train such models may be skewed toward specific populations, producing a model that makes biased decisions. Errors in these models can lead to people being wrongly denied loans or health insurance.

In the absence of a product that protects and monitors AI models, data science teams spend most of their time firefighting and debugging model failures. This is a heavy burden for an organization, and one that inhibits algorithm development and limits scalability, both costly problems.

We're privileged to be working with forward-thinking AI and data science teams across a wide variety of industry verticals, including finance, payments, travel, insurance, human resources, medical devices, genomic diagnostics, cloud services, networking, storage, real estate, and many others. These teams are mindful of AI risks and use Robust Intelligence to minimize the overhead costs of protecting and debugging AI models in production. What once didn't seem possible is now simple: a data scientist can integrate an AI Firewall with a single line of code, on-premise, without their data ever leaving the organization (a sketch of the idea follows).
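
To show what that integration pattern looks like in spirit, here is a minimal sketch that continues the toy example above. The class name, checks, and one-line call are hypothetical illustrations of the wrapping idea, not Robust Intelligence's actual API; because the wrapper runs in-process alongside the model, data stays inside the organization's own environment.

```python
# Hypothetical sketch of the wrapping idea (illustrative names, not the real
# product API). The wrapper validates and normalizes inputs before they reach
# the model, and runs entirely in-process, so no data leaves the organization.
import pandas as pd

class FirewalledModel:
    """Wraps an existing model and guards it against malformed inputs."""

    def __init__(self, model, known_categories):
        self.model = model
        self.known_categories = {c.lower() for c in known_categories}

    def predict_proba(self, X: pd.DataFrame):
        X = X.copy()
        # Normalize formatting drift such as the capitalization issue above.
        X["merchant_category"] = X["merchant_category"].str.lower()
        unknown = ~X["merchant_category"].isin(self.known_categories)
        if unknown.any():
            raise ValueError(
                f"Unknown categories: {sorted(X.loc[unknown, 'merchant_category'].unique())}"
            )
        return self.model.predict_proba(X)

# The "single line" added to an existing pipeline (reusing `model` from the
# sketch above):
safe_model = FirewalledModel(model, known_categories=["grocery", "electronics"])

# The capitalized input is now normalized and scored like its lowercase form.
print(safe_model.predict_proba(
    pd.DataFrame({"merchant_category": ["Grocery"], "amount": [35.0]})
)[0, 1])
```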

To continue realizing the vision of the AI Firewall, we raised a $30M Series B financing round. The round was led by Tiger Global, with participation from all of our existing investors: Sequoia Capital, Harpoon, and Engineering Capital. We feel fortunate to be supported by this cohort of visionary individuals who have been advocating for the mission and the company since its inception. This funding allows us to continue developing the AI Firewall and expanding its reach. In doing so, we'll be able to protect more models and reduce more risk.

Creating a robust intelligence is hard. We won't always get it right, and the path there will continue to include failures and anxieties alongside the small and big victories. Our greatest asset on this journey is not the deep technology, product, or business processes we create. It is our unique team and the culture we build every day in our pursuit of making the seemingly impossible possible. Let's go.

Read more from TechCrunch here.
