Steve Escaravage, a senior vice president with Booz Allen Hamilton and a 2022 Wash100 Award winner, recently spoke with ExecutiveBiz for the publication’s latest Executive Spotlight interview regarding the research his team is working on to advance AI tools and solutions for Booz Allen’s customers.
In addition, Escaravage discussed the ethical challenges in AI innovation, the technological gaps in AI that the U.S. must fill across its military service branches, and how industry and academia can accelerate AI implementation for federal agencies.
You can read the full Executive Spotlight interview with Steve Escaravage below:
ExecutiveBiz: What can you tell us about the research your team is doing for the benefit of advanced AI tools and how are you developing pivotal solutions for your customers in the defense and civilian sectors?
Steve Escaravage: “I personally believe it’s pivotal to keep your finger on the pulse of emerging methods and techniques. The field is moving so quickly. In previous waves of big data analytics and data science, we would see six-to-twelve-month innovation cycles, but these AI waves have been much shorter. I try to keep our teams focused on two things.
First, we have an original research team whose work is being published in best-in-class, peer-reviewed journals and presented at conferences. They’re looking at emerging methods and techniques, and the application of those techniques to the federal, U.S. public sector missions we support.
They’re looking at the advancements in natural language processing and how they would apply in the context of the intelligence missions we support, the defense missions we support, and things like entitlement programs. For example, how could we use emerging methods and techniques to increase personalization in entitlement programs on the civil side of the business?
The second part of our research effort is the applied component. This involves attempting to replicate success achieved in research and laboratory settings under the real constraints we see daily in the government missions we support. Here, a considerable focus is bringing AI capabilities to the edge. How do you package, deliver, and update learning models and processes at the edge, which can look very different in the defense sector versus others like healthcare?
And then there’s robustness. How do you understand the sustainability of artificial intelligence capabilities when you deploy them into production? I always like to say that when you release these capabilities into production for the first time, that’s when the game begins. That’s when it gets busy with trying to figure out how to sustain these capabilities, push updates, and be robust in the face of active adversaries, as is sometimes the case in defense and intelligence missions.
It’s important to have one foot in the research, especially given how fast innovation moves in this field. Booz Allen stays up to date on AI innovation through a combination of partnerships and dedicated experts. We have a team at Booz Allen that performs tech scouting and environmental scanning. That’s all they do, day in and day out, and they are extremely proficient at identifying potential disruptive technologies. They also monitor academic publications to try to understand what is new and exciting.
In addition, academic partnerships are important, and so are research partnerships, especially with large technology companies. We highlight the relationship that we have with Databricks, for example. We’ll ask them what they’re integrating into their tool sets and what’s on their roadmap. That gives us a sense of what will be available in the next three to six months.
From a policy perspective, there have been exciting announcements around potentially creating a National AI Research Resource. I think that’s incredible and much needed to stay competitive on a global scale. One thing that’s unique about artificial intelligence, specifically machine learning-based methods, is that, unlike traditional software, the research process continues into the initial engagement with users in real-world environments.
I believe we have to expand our definition of research in this context and make sure we’re including that applied research: not just getting capability into production and real-world use, but then optimizing it in real-world use as part of the research process so we don’t hit a period of disillusionment. We’re not done with the research once we reach end users for the first time. In some cases, that’s when it begins.”
ExecutiveBiz: When you’re developing AI and machine learning solutions for federal agencies, what are the short-term and long-term ethical challenges that will need to be addressed in policy and other areas as tech adoption becomes a necessity to gain an edge over our adversaries?
Steve Escaravage: “When we talk about responsible AI, it all comes down to trust. As the user and the one interacting with AI, you have to be able to trust the design process, the designer, as well as how it was deployed and tested. We’re going to need to develop ways to build trust and create safeguards.
I do believe there is sufficient policy and regulation out there to begin to guide the industry, but there’s an opportunity for additional guidance like we’ve seen in the medical device industry. They have formal publications and guidance for the industry that spell out the best practices for approaching certain aspects of the design, the manufacturing process, and the distribution.
One thing I do find is that, specifically in the context of ethics, fairness, or responsibility, the conversation today tends to carry a negative connotation. As somebody who’s been practicing in the field for quite some time, I agree that we need to adopt controls and safeguards, and that there are risks related to the technology that are somewhat novel.
However, I think there’s a huge upside to making our systems even safer and fairer. We are obsessed with the negative, but there is a real opportunity to make processes better through AI integration, and I’m excited about that.
The final report of the National Security Commission on Artificial Intelligence states that the Department of Defense should achieve a state of military AI readiness by 2025.
I like that as a concept, being AI ready, but what does that actually mean? It probably means understanding the state of the art and understanding the use cases to recognize an opportunity to use these capabilities within your organization. After that, it comes down to understanding the conditions in which they work well and their limitations.
I think that industry, in support of the federal government, should be responding to that call to action. We should all be working together to be AI ready by 2025. I actually think that’s table stakes. We have to get there.”
ExecutiveBiz: Previously, you expressed concern about the potential for the U.S. to lose its military-technical superiority. What can you tell us about the gaps in AI that need to be addressed within our military service branches?
Steve Escaravage: “There’s a line in the final report of the National Security Commission on Artificial Intelligence that was profound for me, which talks about the U.S. losing its technological edge. That hit me hard. I hope that it hits everyone in our country pretty hard because that is a scary observation. I am concerned about the urgency that is being demonstrated and the scale of the investment we need to make.
There are some efforts underway to inventory or catalog AI investments today, which I think is a great start. I’m glad that was a requirement put into some of the authorization acts. But I also ask myself frequently, “What about the complement of that? What are the investments that are not being made today?” I think we’ve seen enough from AI as a technology in the commercial space that we need to press the ‘I believe’ button and really invest at scale in some of these projects that could take some time to put together.
Cyber is a machine-speed domain. Understanding irregular and potentially illegitimate activity on networks and automatically identifying vulnerabilities are must-have capabilities. I would argue, in the cyber domain, we are in a technology race against our strategic competitors – and the game is on.
In the context of the military, just think about how quickly things will occur in the future based on artificial intelligence and the human-machine teams that will be developing and assessing courses of action for maneuver, for intelligence collection, and if necessary, on a battlefield.
These are capabilities that are going to be AI-driven in the future, sooner than we’re probably ready for. Now is the time to start investing in the protocols for safe use. We are in a daily cat-and-mouse game with nation-state advanced persistent threats. We are operating under our ethics and values, which may be different from those of our adversaries.
We are constantly in a state of engagement, and it’s difficult to imagine not having the best capabilities in the world in this area, given the potential implications to federal agencies and everyday citizens. I think this is something that is going to require continued, sustained, significant investment so that we can continue to lead the world.”
ExecutiveBiz: With industry and government being on different levels of AI adoption and tech innovation, what can you tell us about the limits and barriers facing both sides long-term? And how can industry, academia and partnerships increase collaboration to accelerate AI implementation for federal agencies to ensure the benefit of U.S. national security?
Steve Escaravage: “The things that come to my mind are limitations on the federal side. Budget uncertainty is a factor. Whether it’s an annual process or a three-year budget developed 18 months in advance, this technology is moving too fast for the way we currently handle budgeting.
Plus, there’s a lot of inertia to change. When you’re operating at such a high security posture, and you have to maintain continuity of mission, all of those things collide. It is easy to experiment with AI on the fringe, but in a high security posture, it is really hard to consider a cutover date like we do in a commercial environment, where you set a date and work through the challenges.
The good thing about the federal government is that while there is inertia, when something does break through and there is momentum, there’s conviction. I do think it’s possible for the government to invest, like they have done historically, in programs over a period of time at scale that lead the world.
I still believe that. It has to be done in collaboration with the commercial industry. We just cannot ignore the scale and the investment that’s been made there. One thing I would love to see is tackling this challenge of operational security and security architecture.
One of the biggest limitations to integrating and using AI today is the lack of approved computing environments where you can take a capability from unclassified to classified levels and complete the feedback loop: deploying the capability into production operations and real-world missions, then understanding how the machine learning models are performing in those environments.
The computing environments and networks where you are typically tuning and calibrating AI systems are not at the same level of security or behind the same security architecture as production systems. That creates this dilemma. We’ve got to create that feedback mechanism that doesn’t exist today.
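As an editorial aside, the feedback loop described above is, in practice, often implemented as drift monitoring: comparing the score or feature distributions a model saw at training time against what it sees in production. The sketch below is illustrative only and not from the interview; the population stability index (PSI), the 0.2 alert threshold, and the synthetic data are common industry assumptions, not Booz Allen specifics.

```python
# Minimal sketch of one piece of the production feedback loop:
# compare a model's training-time score distribution against live
# production scores to flag drift. Bin count, epsilon, and the 0.2
# threshold are illustrative conventions, not prescribed values.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between two 1-D samples; values above ~0.2 are commonly
    treated as a signal that the model needs retraining or review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins at a small epsilon to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.10, 10_000)  # scores at training time
live_scores = rng.normal(0.6, 0.15, 10_000)   # shifted production scores
psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}", "-> drift" if psi > 0.2 else "-> stable")
```

In a classified setting, the hard part is not this calculation but moving the production-side statistics back to the environment where the model is tuned, which is exactly the missing feedback mechanism described here.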
We can do it. It’s been done before. We just need a sense of urgency and sustained investment to get it done. I just think there’s incredible opportunity for this technology, and I do believe we’ve seen enough to have confidence that it provides transformational potential.
We do not seem to be moving fast enough, though, as a nation. The court of public opinion needs to champion this so the government has the support to move forward. The future is going to be awesome. We could get there sooner if we just make a commitment and start leaning forward.”