Artificial Intelligence - Federal News Network https://federalnewsnetwork.com Helping feds meet their mission. Thu, 18 Jul 2024 20:01:02 +0000 en-US hourly 1 https://federalnewsnetwork.com/wp-content/uploads/2017/12/cropped-icon-512x512-1-60x60.png Artificial Intelligence - Federal News Network https://federalnewsnetwork.com 32 32 How the State Department is leaning into AI, modernization efforts to support federal workers https://federalnewsnetwork.com/artificial-intelligence/2024/07/how-the-state-department-is-leaning-into-ai-modernization-efforts-to-support-federal-workers/ https://federalnewsnetwork.com/artificial-intelligence/2024/07/how-the-state-department-is-leaning-into-ai-modernization-efforts-to-support-federal-workers/#respond Thu, 18 Jul 2024 19:30:02 +0000 https://federalnewsnetwork.com/?p=5080347 As technology continues to evolve and reshape entire industries and work environments, the federal workforce is no exception.

The post How the State Department is leaning into AI, modernization efforts to support federal workers first appeared on Federal News Network.

]]>

Michele Sandiford |

As technology continues to evolve and reshape entire industries and work environments, the federal workforce is no exception — they must adopt innovative technologies in their focus on global talent management in order to enhance productivity, efficiency, and effectiveness of both the individual employees and the overall agencies.

Don Bauer, chief technology officer for global talent management at the Department of State, said that, in today’s times, “every single thing we do has a nexus with technology.”

“That’s part of my job — not only to make sure that we have technology, but to make sure that the actual technology interacts well with the rest of the technology that we have,” Bauer said.

The Department of State, according to Bauer, supports a global workforce of 278 locations across the world — and, “when it comes to technology and having systems talk to each other, it’s always a challenge when you have to integrate platforms.”

“The biggest challenge in the federal government has been, ‘I don’t want my data going outside into other people’s systems,’” Bauer said on Federal Monthly Insight — Trustworthy AI in the Workforce.

Challenges to modernization

For Bauer, keeping as much corporate IP within the department’s own control, as opposed to putting it into a third-party platform, is ideal “because [platforms] go away, they change. And then you eventually have to take that logic and put it somewhere else.”

Much of modernization efforts happen because they have to, Bauer said. He points to the cyclic nature of his organization — recurring seasonal bidding seasons and performance management cycles, to name a few — as another challenge to accomplishing that.

“HR modernization is somewhat unique in the fact that we don’t get to stop doing our jobs while we’re modernizing,” Bauer said. “We have to continue to fly the plane while we’re working on it, because pay doesn’t stop, promotion doesn’t stop. These cycles continue, and the systems have to support it.”

Leveraging the power of trustworthy AI

Some technologies, like the transformative technology of artificial intelligence (AI), showcase a great deal of promise when it comes to implementing new, effective and efficient solutions for the federal workforce.

AI is already making significant strides in the federal sector. Bauer said the Department of State has already started to implement generative AI internally, with what they currently call “state chat,” where users can upload documents and ask questions related to those documents.

“If I can upload 100 policy documents, and then interactively ask a question about it, that’s powerful,” Bauer told the Federal Drive with Tom Temin. “It brings it to the masses, like you say, I don’t have to be a guru in order to get it. And the beauty of what they’re building right now internally is, every single answer comes a little icon with an eye. You click that eye and it shows you where it got that data.”

The quick and easy ability to identify the source of AI’s answer is key to its trustworthiness and use in the federal workforce, according to Bauer.

“Not only do I want the answer, but I want to know where it came from so that I can make sure that it isn’t a hallucination,” he said.

Embracing modernization in global talent management

To support global talent management, federal agencies are implementing comprehensive talent acquisition and retention strategies. Perhaps just as important, the use of technology to modernize these strategies and processes is helping to streamline recruitment and onboarding efforts.

The integration of advanced technologies and strategic global talent management is transforming the federal workforce. Modernization plays a crucial role in this transformation, keeping federal agencies and their workers poised and ready with the best tools to succeed.

Bauer says connectivity and integration are paramount to building the optimal modern user experience.

“I’m still kind of weaving my way through my legacy platforms,” Bauer said. “So, I call it ‘subsumption’. I’m subsuming a lot of these tools into my current platform, which is ServiceNow as my front end. … I’ve already built all this connectivity, I have integration with my personnel system with my electronic personnel records, all those integrations are built on one platform.”

He explains that he then doesn’t have “all the extra integrations to manage.”

“I don’t have all this extra overhead because every single integration point now is a vulnerability, potentially, and it has to be remediated if there’s security,” Bauer said. “So, this is reducing my footprint while consolidating and giving the modern user experience. So, it’s kind of like, it’s a win-win, but it’s a slow process.”

The post How the State Department is leaning into AI, modernization efforts to support federal workers first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/artificial-intelligence/2024/07/how-the-state-department-is-leaning-into-ai-modernization-efforts-to-support-federal-workers/feed/ 0
The power of AI, data in preparing for the next national emergency https://federalnewsnetwork.com/commentary/2024/07/the-power-of-ai-data-in-preparing-for-the-next-national-emergency/ https://federalnewsnetwork.com/commentary/2024/07/the-power-of-ai-data-in-preparing-for-the-next-national-emergency/#respond Tue, 16 Jul 2024 16:39:01 +0000 https://federalnewsnetwork.com/?p=5077133 The potency of AI and implementation of data enabled missions hinges on skilled talent.

The post The power of AI, data in preparing for the next national emergency first appeared on Federal News Network.

]]>
Our nation is currently embroiled in multiple geopolitical theaters, and our government is working hard with allies and partners around the world to ensure resilience and mission success. Simultaneously, we can’t let global events and international needs halt or impede innovation at home on the civilian front and for the good of U.S. citizens.

Core civilian agencies are tasked each day with ensuring U.S. prosperity, continuity and trust on a national level — and these agencies now find themselves in a unique position at the intersection of massive mission needs combined with increasing volumes of data. The key to pulling it all together: artificial intelligence, the most important technology advancement in a generation. Stemming from last year’s Executive Order on AI, deadlines are quickly approaching for agencies to comply with the EO and Office of Management and Budget requirements, serving as a critical impetus to ensure the resiliency of the country — powered by data and AI — no matter what is happening on the global scale.

It is critical that civilian agencies forge ahead with robust, coordinated, scalable and repeatable strategies to take advantage of AI and the power of data to prepare all-of-government responses to not only maintain equilibrium, but also prepare to meet challenges at home — from extreme weather events, public health crises drawing on lessons from the COVID-19 pandemic, financial and critical infrastructure threats, and beyond. In an era marked by geopolitical tensions, climate crises and cybercrime, being prepared is not merely an option but a necessity.

AI and data: The key to empowering critical civil agencies

So what does preparation look like in action? Technology and data are not just tools, but lifelines that can significantly impact emergency responses and day-to-day operations. Here are three critical areas where AI-powered and data-enabled mission approaches can revolutionize civilian and public sector efficiency and efficacy:

  1. Climate resilience: As natural disasters become more frequent and severe, the need for comprehensive data sharing across agencies has never been more urgent. AI can process vast datasets rapidly, pinpointing at-risk communities and extending the lead time for extreme weather forecasts, turning hours into minutes and saving lives in the process.
  2. Public health: Early detection of public health threats can prevent them from spiraling into endemics and full-blown pandemics. Through enhanced data sharing between local and federal entities, and AI-driven pattern recognition, agencies can quickly identify potential outbreaks, ensuring that preparedness is a step ahead of the problem.
  3. Fraud Prevention: The importance of bolstering the resilience and security of systems cannot be overstated. A recent advisory from the Cybersecurity and Infrastructure Security Agency highlights the threat of nation-state hackers targeting civil society organizations to destabilize democratic values. The repercussions of such cyberattacks have already disrupted our healthcare systems. By employing AI to continuously monitor and analyze systems and data, and to respond to security breaches and fraudulent activities swiftly, we can enhance the integrity of our civil agencies and protect the interests of our citizens. This proactive approach is vital in safeguarding our nation, its people, and our democratic way of life against nefarious threat actors.

Investing in the future: The human factor and reimagined automation

The potency of AI and implementation of data enabled missions hinges on skilled talent. To meet the challenge of tomorrow, agencies need to double down on investing in their people to modernize their workforce the same way they are modernizing their technology. Reflecting on how the widespread availability of Microsoft Office tools transformed workforce skillsets three decades ago, it’s clear that tools alone do not suffice; adoption and proficiency in their use does.

Today, we find ourselves at a similar juncture with AI and data. It’s not just data scientists and people in technical roles who need to become proficient — it’s everyone. You don’t need to be a data scientist, but you do need to be data fluent. As technology perpetually evolves, the constant that remains is the people behind the machines. Success hinges not just on having the latest technology but on working collaboratively to leverage these tools for better mission outcomes.

Tied to reimagined talent development in the quest for public sector modernization, it will be paramount to transition from manual to automated processes, particularly in data management and emergency response. Reimagining workflows with AI and real-time data can free up agency staff to focus on strategic priorities and empower urgent, data-informed action in crisis situations. Civil agencies must make their data readily available to stakeholders and be equipped with AI tools and proficient personnel to deploy solutions at a moment’s notice when American lives and livelihoods hang in the balance.

A call to action

Agencies need to balance today’s demands with tomorrow’s potential. As we continue to navigate the digital age, the mandate for civil government agencies is clear: Embrace technological advancement, invest in talent, and create and maintain a proactive roadmap for modernization. There’s greater awareness and excitement about civil agencies being able to solve challenges through better use of their data. While agencies are at different points in their digital transformation journeys, the potential to overcome challenges with data is becoming more apparent.

The challenge is that agencies need to deliver on the missions in front of them today with the tools they have, while taking modernization steps to build the road for tomorrow. Only then can we truly safeguard and serve the American public.

Richard Crowe is president of the civil sector at Booz Allen Hamilton, the leading provider of AI services to the U.S. federal government.

The post The power of AI, data in preparing for the next national emergency first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/commentary/2024/07/the-power-of-ai-data-in-preparing-for-the-next-national-emergency/feed/ 0
‘An extraordinary opportunity’: How HHS uses shared certificates in hiring https://federalnewsnetwork.com/hiring-retention/2024/07/an-extraordinary-opportunity-how-hhs-uses-shared-certificates-in-hiring/ https://federalnewsnetwork.com/hiring-retention/2024/07/an-extraordinary-opportunity-how-hhs-uses-shared-certificates-in-hiring/#respond Fri, 12 Jul 2024 18:15:09 +0000 https://federalnewsnetwork.com/?p=5073319 At the Department of Health and Human Services, using shared certificates, in some instances, has cut the agency’s time-to-hire in half.

The post ‘An extraordinary opportunity’: How HHS uses shared certificates in hiring first appeared on Federal News Network.

]]>
Federal Insights- Shared Certs- 7/11/2024

Derace Lauderdale |

In just the last couple years, shared certificates have become an increasingly popular recruitment practice across government — and the impact is hard to miss.

At the Department of Health and Human Services, using shared certificates in some instances has cut the agency’s time-to-hire in half.

“We are seeing significant impacts in terms of hiring efficiencies, and we seek to further increase that share of hiring that takes place, not just for the HR shared service centers, but across the department,” HHS Chief Human Capital Officer Bob Leavitt said on Federal Monthly Insights —Trustworthy AI in the Workforce.

Over time, HHS has increasingly relied on shared certificates, particularly for the types of positions that are similar across many of the department’s organizations, Leavitt told Federal News Network. He called shared certificates “an extraordinary opportunity.”

“One, it uses our resources more productively, and two — and more importantly — from a candidate’s perspective, the sooner we’re able to follow up and eventually onboard a candidate, the better,” Leavitt said.

With shared certificates, agencies or offices that make a hire can then give their list of un-hired candidates, already determined to be qualified for a position, to another agency or office hiring for the same position. And hiring for the same type of job happens quite often, Leavitt said.

“There really are fewer unicorn positions out there than we all imagined,” he said.

Using shared certificates shortens the hiring process by using candidates who are already vetted and assessed by hiring managers and deemed qualified for a position. If multiple candidates are hired off of one certificate, that can cut time-to-hire even further.

“It affords selecting officials a quicker mechanism to bring people into the workforce and meet their needs more efficiently,” Kimberly Steide, associate deputy assistant secretary for human capital at HHS, said in an interview with Federal News Network.

That then allows HR managers to focus more strategically in terms of how they spend their time, Steide added.

And the process isn’t only helpful for hiring managers — using shared certificates benefits job candidates as well. They can be considered for positions they might not have otherwise known were out there.

“You can apply once and be considered for multiple vacancies as they come up open,” Steide said. “That expands that applicant’s reach in terms of what’s available to them.”

HHS shared certificates by the numbers

The idea of sharing certificates isn’t new, as it stems from the 2015 Competitive Service Act and subsequent guidance published in 2018. But the practice has gained much more traction in just the last few years.

At HHS, the use of shared certificates began years ago in just a handful of components, like the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention. But it’s more recently become a fully departmentwide effort.

Between 2020 and 2023, HHS hired nearly 12,000 employees off of shared certificates, and increased its shared certificate hires by 33%.

HHS hires made from shared certificates

2020 2021 2022 2023 Total
2,680 2,737 2,950 3,555 11,922

Note: Hiring numbers exclude FDA data, which was not immediately available.

Currently, HHS’ Office of the Secretary is the greatest user of shared certificates. Nearly half of all hires made from shared certificates in the last four years have gone through that office, including all of its staffing divisions.

This year, 11% of hires made through the Office of the Secretary’s HR shared servicing center have been pulled off of shared certificates.

“It might seem like a small number, but that’s coming from a vastly smaller number, and it is increasing significantly,” Leavitt said.

HHS involves SMEs in recruitment

For HHS, like many agencies, a crucial part of the recruitment process is involving subject-matter experts (SMEs) when writing job announcements and assessing candidates. Federal hiring experts say SMEs — usually officials working directly in the office that’s recruiting — offer a helpful perspective on what hands-on skills a candidate would actually need to be qualified for a position.

Especially with recruitment efforts that use shared certificates, HHS involves SMEs when writing solicitations, as well as when reviewing candidate pools.

There are, however, busier or more challenging times of the year for SMEs to be able to take the time to get involved in recruitment. But Leavitt and Steide said they’ve found the officials to be generally willing to offer their support, as it helps their office land a better job candidate at the end of the process.

“We do have to be attuned to the broader environment, but overall, people appreciate the opportunity to engage,” Leavitt said. “But we have to do our bit as well to make sure that the timing works.”

‘HireNow’: The back-end of sharing certificates

HHS’ recruitment arm is massive, involving thousands of hires annually. Underlying the entire recruitment process, HHS uses a platform called “HireNow.” The site compiles tens of thousands of active resumes for hiring managers to sift through when looking for a good fit for an opening at the department.

Right now, there are about 3,000 active job announcements on HireNow that are open to shared certificates, with another 600 or so upcoming announcements. And so far for 2024, HHS hiring managers have selected nearly 900 candidates from shared certificates on HireNow, along with dozens more pending selections.

Currently, there are more than 103,000 active resumes available on HireNow that are open to shared certificates.

“Of course, that’s a large volume to go through,” Leavitt said. “But we’re able to filter that by job series, by grade, and other factors, to help really narrow in on the available pool of candidates that hiring managers across the organization can refer to, rather than starting afresh.”

Image of HHS HireNow platform
Screenshot of HHS’ HireNow platform, depicting the platform’s filter options for job announcements. (Source: HHS)

Once logged into HireNow, HHS hiring managers can view both current and upcoming job announcements employing shared certificates, as well as job announcements that won’t be using shared certificates.

For instance, right now, HHS is actively searching for supervisory physicians, health science administrators and data scientists — all of which have a shared certificate available for hiring managers to use. And coming soon, HHS is opening job announcements for management analysts and administrative specialists.

A list of several current job announcements, shared with Federal News Network, show many originating at the Health Resources Services Administration (HRSA). But with the use of shared certificates, those announcements could later be opened to other HHS components for hiring managers to review and select other candidates.

Image of HHS HireNow platform
Screenshot of HHS’ HireNow platform, showing a list of several current job announcements that are using shared certificates. (Source: HHS)

Application data on HireNow gets fed in through USA Staffing, a talent acquisition system run by the Office of Personnel Management. The system manages federal job applications that come in, and lets hiring managers track and assess applicants. On top of job applications sent directly to HHS, HireNow also intakes information from OPM’s broader talent pools portal meant for sharing certificates governmentwide.

Combining data from the two platforms “makes it easier for our selecting officials, so that they don’t have two places that they need to go to look for resumes,” Steide said.

When creating job announcements in HireNow, HHS staffing specialists will denote whether submitted applications will be shared more broadly. For announcements using shared certificates, candidates are automatically opted into the process, but have the ability to opt out. Once a shared certificate is issued, it remains active in HireNow for 240 days.

“That means that a candidate can apply and be considered for the next 240 days for anything that might come up across the department, which could be quite expansive,” Steide said.

Even with the success, HHS said it’s still working to consolidate the data and processes for sharing certificates. Although the HireNow platform is available for all HR offices to use, not all of them actually use it when going through the shared certificate process.

Shared certificates across government

Of course, HHS is far from the only agency that uses shared certificates — and for all agencies, the process of sharing certificates generally happens one of two ways.

One option is for OPM to initiate a governmentwide pooled hiring announcement. Agencies can then sign onto the announcement and select from a list of qualified job candidates for a common position.

The other option involves a specific agency initiating its own shared certificate announcement. That announcement can either stay internal to share just among different components within a large department, or otherwise get shared more broadly with agencies across government.

HHS uses shared certificates in multiple ways, and the announcements are not always departmentwide. For example, different divisions can also move shared certificates from job announcements they’ve already done in their specific office, and later post them to HireNow for other components to view and make selections.

To decide where and what positions to use shared certificates for, Steide said HHS often looks at where the most vacancies are. But it’s also not as simple as that.

“We’ve had a lot of success with pooled hiring for military spouses [and] for public health associates, which is a huge occupation that spans across the department,” Steide said. “So it really depends on where we have vacancies, where we have the most need, and … unique situations, where we can have one certificate that we can maximize across the department.”

For internally shared certificates, Steide said HHS will look across the department to figure out which occupational series would be the best fit for a pooled hiring effort.

Once an HHS component or office creates an announcement with a shared certificate, that component then has about 40 days to assess and select candidates, before the candidates become available for selection at HHS more broadly.

Depending on the number of certificates that are available on a job announcement, it can be a time-consuming process, but the value is clear.

Steide said, “the amount of effort and time that you put in on the front end just yields you a better product at the outcome.”

The post ‘An extraordinary opportunity’: How HHS uses shared certificates in hiring first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/hiring-retention/2024/07/an-extraordinary-opportunity-how-hhs-uses-shared-certificates-in-hiring/feed/ 0
How a factory approach can accelerate agency use of AI https://federalnewsnetwork.com/innovation-in-government/2024/07/how-a-factory-approach-can-accelerate-agency-use-of-ai/ https://federalnewsnetwork.com/innovation-in-government/2024/07/how-a-factory-approach-can-accelerate-agency-use-of-ai/#respond Fri, 12 Jul 2024 00:35:02 +0000 https://federalnewsnetwork.com/?p=5072535 JP Marcelino, the AI/ML Alliances manager for Dell Federal, said when agencies use a framework to identify AI use cases and technologies they can move faster.

The post How a factory approach can accelerate agency use of AI first appeared on Federal News Network.

]]>

There are more than 700 use cases in the federal inventory for artificial intelligence. Of those use cases, as of Sept. 1, the Energy Department had the most with 177, followed by the Department of Health and Human Services with 156, the Commerce Department with 47 and the Department of Homeland Security with 41.

The thirst for using AI isn’t just about use cases. The amount of money agencies are spending on AI tools and capabilities is growing. From 2020 to 2022, for example, agencies spent $7.7 billion, according to market research firm Deltek. That’s a 36% increase over three years. And this doesn’t include all the funding that goes into systems embedded with AI, such as the DHS insider threat infrastructure or the Department of Veterans Affairs’ health and data analytics platform.

The data for 2023 and 2024 will show even more investments, particularly with the relatively new excitement over generative AI.

Over the last few months, agencies have started to follow the Office of Management and Budget’s direction to offer controlled uses of GenAI tools like the Air Force’s new platform called the Non-classified Internet Protocol Generative Pre-training Transformer (NIPRGPT), which the service hopes can help with tasks such as coding, correspondence and content summarization.

The Energy Department also released a new reference guide for using GenAI. The guide provides an understanding of the key benefits, considerations, risks and best practices associated with GenAI.

JP Marcelino, the AI/ML Alliances manager for Dell Federal, said most initial forays into AI by agencies fall into two types: traditional or discriminative AI, used to detect patterns and for simpler analytics; and generative AI, where agencies are starting to generate new content based off of their data.

Right people, right tech in the AI factory

While agencies are more comfortable with the traditional or discriminative AI use case, slowly they are starting to figure out how they can use GenAI, particularly in a more secure manner.

“When it comes to GenAI, there’s still a lot more carefulness that needs to be done to make sure that nothing’s being exposed from a security standpoint, and making sure all of your data is managed and secured in a way that doesn’t get exposed,” Marcelino said on the discussion Innovation in Government sponsored by Carahsoft. “I still think there’s a challenge around the AI workforce that’s capable of developing these solutions. In order to alleviate and offset some of those deficiencies, part of it is just looking for the right kinds of partners that can help develop these solutions. No one’s ever going to find a single partner or a single software provider that can solve everything there is to develop an AI solution. It really takes a village to develop these solutions. So whether it’s a partner that can help you out early on in the process of figuring out use cases to tackle and focus on, or partners that are more in the line of helping you develop your solutions and put together proof of concepts and move them into production-ready environments for you, I think it’ll take quite a bit of effort from numerous partnerships to be able to solve every challenge along the way.”

To that end, Marcelino said Dell Technologies is leaning into the concept of an AI factory. He said this approach provides a framework to accelerate the implementation of AI capabilities.

“We are really helping customers understand the potential use cases that they want to tackle from an AI standpoint. We are helping them understand what kind of data they have to tackle those potential use cases, whether it’s good or bad data; do you have enough of that data to not only train a solution, but also make sure that you can validate that solution as well?” he said. “Then, there are the three pieces in the middle that help enable the AI capability from a solution standpoint. One is the infrastructure and hardware piece and the ability for us to provide the right kind of AI infrastructure and hardware for the given use case. If you’re looking at a really complex AI solution that requires very large language models to develop a solution, you may be looking at some really high-end compute to be able to support that kind of capability. But at the same time, if you’re a single user, just looking at some kind of AI sandbox, or want to start developing or testing smaller AI models locally, you may not need such high-end compute for that. You may need some kind of faster workstation that can support a single GPU, for example, or some really lower end compute that can handle a handful of users simultaneously.”

Data remains key to the success

The AI factory can help agencies close existing gaps in the workforce, the challenge of moving the tools into production and in addressing data quality and management challenges.

“You can just easily have an AI solution that can be garbage-in and garbage-out, so you want to make sure not only you have good quality data, but also have the ability to have a good data management strategy around it so that you can pull that data from the right places and be able to have good quality data to feed into an AI solution in order to achieve the right kind of accuracy and outcomes you want out of an AI solution,” Marcelino said. “When it comes to moving AI solutions from pilot to production, there’s a pretty low success rate of AI solutions that make it to production. There’s a lot of challenges that are involved with that, whether it’s not getting enough accuracy out of your AI solution, it’s not meeting the right types of outputs or outcomes that you’re looking to achieve from that solution or it can be something as simple as it’s taking too much time to achieve the accuracy that you’re looking to develop.”

One way to help address this challenge, he said, is through a machine learning operations (MLOps) strategy, which helps organizations more easily automate the continuous training and deployment of ML models at scale. It adapts the principles of DevOps to the ML workflow.

“I think there’s ways to help alleviate some of those challenges. Implementing things like an MLOPs strategy, so you have better visibility into the models that you’re developing and looking to deploy,” he said. “Being able to leverage solutions that can do things like help augment the development process, whether they’re things like auto ML tools, for example, to essentially use AI to develop AI solutions. Or leveraging solutions like AI factories, where we’ve taken a lot of the guesswork out of being able to deploy an AI solution into production, where we can essentially provide an end-to-end capability that encompasses partner solutions with our infrastructure and hardware, with the ability to fold in other types of solutions to really package it up in an environment that’s been pre-validated and makes it lower time-to-value to deploy these solutions.”

Listen to the full discussion:

The post How a factory approach can accelerate agency use of AI first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/innovation-in-government/2024/07/how-a-factory-approach-can-accelerate-agency-use-of-ai/feed/ 0
AI-enabled digital twins are transforming government critical infrastructure https://federalnewsnetwork.com/commentary/2024/07/ai-enabled-digital-twins-are-transforming-government-critical-infrastructure/ https://federalnewsnetwork.com/commentary/2024/07/ai-enabled-digital-twins-are-transforming-government-critical-infrastructure/#respond Thu, 11 Jul 2024 18:30:22 +0000 https://federalnewsnetwork.com/?p=5071975 AI-enabled digital twins are yielding powerful benefits for government teams.

The post AI-enabled digital twins are transforming government critical infrastructure first appeared on Federal News Network.

]]>
Digital twins, increasingly deployed in both public and private sector organizations, are prized for their ability to create a virtual model of a physical object or space, and for their ability to show how these objects interact with their environment. Especially as digital twins become AI-enabled, federal IT leaders are finding that these solutions help lead to better decision-making along with lowered costs and increased safety and efficiency.

Digital twins solve key challenges for government critical infrastructure

Digital twins allow an organization to dynamically model and fine-tune processes via virtual abstractions of a real-world entity and are typically informed by ongoing and often real-time data inputs. While digital twin technologies are vital for revolutionizing operations in many industries, nowhere is the need greater and the benefit more impactful than in the public and critical infrastructure sectors.

By its very nature, critical infrastructure requires high reliability and minimum downtime to support essential mission operations, whether that involves keeping production on schedule for much-needed military aircraft, maintaining power plant operations, or ensuring rapid development of an essential highway project. These models can also be used to monitor the safety of aging infrastructure or identify how city projects will impact its citizens. Digital twins enhance quality and save time by conducting design and analysis in the virtual world, making them especially useful in meeting these heightened operational demands within a government critical infrastructure setting.

The addition of artificial intelligence (AI) has further enhanced a digital twin’s value in government. AI-enabled digital twins can automatically strategize process workarounds in defense manufacturing to avoid downtime from equipment failure, streamline ER and ICU operations in a Department of Veterans Affairs hospital, or help a public health agency speed vaccine development with predictive modeling for new production lines. These are just a few examples of how AI boosts a digital twin’s capacity to support mission-critical operations through stronger performance, accuracy, scalability and predictive capabilities.

Digital twins facilitate government innovation

Given the benefits described above, it is not surprising to see that 63% of federal agencies are already investing or planning to invest in digital twins. To get the most out of the investment, federal transformation teams should carefully select the type of deployment that best fits the specific use case, with digital twin options generally broken down into three types.

A descriptive twin is an engineering design and visual representation that embodies all knowledge of a physical object or set of objects; these are especially useful for training purposes or in architectural modeling. An informative twin is similar to a descriptive twin but features an additional layer of operational and sensory data to extract performance-related insights. Finally, a predictive or autonomous twin includes updatable models that allow the digital twin to iteratively learn and take action autonomously within the organizational IT system.

Not surprisingly, government technology leaders are finding the full range of these options to be useful in both optimizing current use cases and in supporting entirely new innovations in critical infrastructure that weren’t previously possible. Consider the example of how digital twins can aid in the design of a cutting-edge fusion power generation facility:

Fusion power generation has the potential to provide near-limitless and highly sustainable energy, but developing production capabilities at scale requires intensive computational resources and artificial intelligence for development and testing. Engineering teams can achieve this by linking supercomputers with digital twin prototyping models to run the massive amounts of modeling and simulations needed for fusion research. Digital twins future-proof the development lifecycle with sensor-driven feedback loops that continually incorporate new data and metrics as technologies mature.

Key implementation priorities

For all their promise, successful digital twin deployments require government transformation teams to make the right design and configuration choices during implementation. One key priority is to ensure the correct data is being gathered and analyzed. This involves choosing the appropriate data to target, and then choosing the right design and placement of sensors and edge systems for optimal collection and analysis of that data.

Security must be a top priority during implementation to ensure seamless collaboration. Any breach in the security of a digital twin could potentially compromise the entire physical system it represents, leading to significant operational, financial and even safety risks. Therefore, thorough security measures, including data encryption, access control, authentication protocols and regular security audits are essential to safeguarding digital twins and the systems they represent from malicious attacks and unauthorized access.

Robust authentication protocols represent another key implementation priority. As mentioned earlier, any gap in security along what could be a global network of designers and suppliers poses a potential risk to project integrity or even national security. This is why a strong access management paradigm must be in place, ideally fortified with software guard extensions (SGX) that create protected enclaves for data, and trust domain extensions (TDX) that expand these enclaves to trusted third parties.

Throughout, government transformation teams looking to implement digital twins should prioritize solutions that can integrate with existing infrastructure and systems. This is necessary in government settings where funding limitations or continuity of critical infrastructure operations make legacy systems or components unavoidable. By iteratively adding compute resources strategically and cost-effectively to legacy systems, agencies can scale the digital twin deployment overtime on a realistic path toward progressively larger and more demanding use cases.

Conclusion

AI-enabled digital twins are yielding powerful benefits for government teams in charge of designing and maintaining critical infrastructure, including faster process optimization, more situational awareness and stronger predictive capabilities. Furthermore, when agencies prioritize an incremental approach that strategically incorporates legacy assets and is driven by a clear implementation plan, the outcome is a virtuous cycle of ongoing mission success for agencies and ongoing value for taxpayers and citizens.

Burnie Legette, Director of IOT and Artificial Intelligence at Intel.

The post AI-enabled digital twins are transforming government critical infrastructure first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/commentary/2024/07/ai-enabled-digital-twins-are-transforming-government-critical-infrastructure/feed/ 0
Resolving federal hybrid cloud challenges with AI and automation https://federalnewsnetwork.com/commentary/2024/07/resolving-federal-hybrid-cloud-challenges-with-ai-and-automation/ https://federalnewsnetwork.com/commentary/2024/07/resolving-federal-hybrid-cloud-challenges-with-ai-and-automation/#respond Tue, 09 Jul 2024 14:36:40 +0000 https://federalnewsnetwork.com/?p=5068192 As government networks load up on new data and applications, gaining visibility over modern IT estates has become more difficult than ever.

The post Resolving federal hybrid cloud challenges with AI and automation first appeared on Federal News Network.

]]>
Federal agencies are modernizing aggressively, driving the addition of new systems and capabilities and creating increasingly diverse hybrid cloud ecosystems. While such modernization is necessary to keep up with growing service mandates and citizen expectations, the complexity that arises from these hybrid cloud architectures poses significant challenges in orchestrating and monitoring government IT systems.

To solve this conundrum, federal IT leaders must lean into artificial intelligence and automation to better manage their complex IT environments. When supported by a strong data management foundation, this combination can deliver enhanced service-level visibility and control for government IT teams in charge of ever-changing hybrid cloud architectures.

Hybrid cloud brings challenges of complexity and scale

As government networks load up on new data and applications, gaining visibility over modern IT estates has become more difficult than ever. Rather than adopt a single cloud service from a single cloud provider, agencies are embracing a wide range of cloud vendors and approaches. This can leave teams, who may already be understaffed and swimming in technical debt, siloed and struggling further to manage a workload-intensive mix of legacy and modern applications and infrastructure.

This dramatic proliferation of operational complexity is fueled by massive increases in the volume, variety and velocity of data to be managed. Additionally, IT platforms are often not accessible, understandable or usable for many user-level government workers who need to collaborate on them. The picture is further complicated by the fact that not all workloads are moving to the cloud and by the persistence of legacy monitoring tools that aren’t able to keep up with the variety and velocity of data across hybrid cloud architectures.

All these factors contribute to an unsustainable scenario of outdated tools and disjointed processes that stifles IT’s ability to respond to spiraling complexity and keep up with evolving agency and end user expectations. Fortunately, government IT teams can overcome these obstacles by making strategic use of both AI and automation to progress towards a state of autonomic IT and bring more visibility and control to their hybrid cloud architectures.

Overcoming hybrid cloud complexity with AI plus automation

To make sense of the current state of hybrid cloud complexity and better meet key mission objectives, federal IT teams must opt for a modern approach to ITOps that combines AI and automation to create a more unified service view across the entire hybrid cloud universe. This includes all data center, public cloud – software-as-a-service, infrastructure-as-a-service and platform-as-a-service — and private cloud environments.

The combination of AI and automation is crucial to driving observability across each of these environments, applying machine learning and scalable process optimization throughout all hybrid infrastructure data and systems. This empowers staff to perfect and then automate routine operational tasks, such as collecting diagnostic data, exchanging real-time operational data between systems and platforms, executing ticketing, remediation workflows and more.

The most successful deployments combine a wide range of data across environments to establish a real-time operational data lake. This makes it possible for IT teams to analyze and act on the data at “cloud scale” while applying a rich set of analytical techniques to add business service context and meaning to the data – with multi-directional workflows for both proactive and responsive actions.

Facilitating AI and automation with stronger data management techniques

While there is no single blueprint to follow for applying AI and automation for more alignment and orchestration of agencies’ hybrid cloud environments, the most successful efforts make sure to prioritize the underlying integrity of data. The right data management foundation will allow AI to properly manage, model and analyze operations, and this foundation is also essential to optimize and scale processes with automation.

In particular, federal IT teams should pursue three essential data-related priorities to support the journey to complete visibility and autonomous IT operations. To begin with, data must be of high fidelity, meaning it’s critical to collect the right types of data from the right sources in order to accurately reflect the state of what’s happening with an agency’s IT and business services at any given time. In addition, the cleaning, analyzing and acting on data must happen in real-time – ideally via automated processes and closed-loop decision making to enable action quickly without the need for a human analyst to be involved.

Throughout, data must be thoroughly contextualized, with all metadata and asset dependencies clearly defined through a service oriented view that enhances the ability to understand operational patterns and identify anomalies or performance issues. The right platform for AI and automation will include capabilities for managing data in these ways, enabling teams to cut through the noise and quickly establish the impact and root causes of issues. This, in turn, sets the broader stage for fundamental IT and agency transformation toward stronger agility, speed and growth.

As governments become increasingly digitized, many agencies struggle to manage their integrated hybrid-cloud environments. Fortunately, the right combination of AI and automation founded on the right data management techniques can bring more visibility and control to these environments. As a result, federal IT teams can conduct faster root cause analysis, reduce downtime, optimize IT investments, and provide a more stable foundation to support broader agency modernization efforts as technology continues to advance.

Lee Koepping is senior director for global sales engineering at ScienceLogic.

The post Resolving federal hybrid cloud challenges with AI and automation first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/commentary/2024/07/resolving-federal-hybrid-cloud-challenges-with-ai-and-automation/feed/ 0
AI ‘expressions of interest’ flood into TMF Board https://federalnewsnetwork.com/artificial-intelligence/2024/07/ai-expressions-of-interest-flood-into-tmf-board/ https://federalnewsnetwork.com/artificial-intelligence/2024/07/ai-expressions-of-interest-flood-into-tmf-board/#respond Thu, 04 Jul 2024 16:36:20 +0000 https://federalnewsnetwork.com/?p=5064243 Harrison Smith, a member of the Technology Modernization Fund board, said the streamlined proposal process has helped agencies submit more AI proposals.

The post AI ‘expressions of interest’ flood into TMF Board first appeared on Federal News Network.

]]>
var config_5064302 = {"options":{"theme":"hbidc_default"},"extensions":{"Playlist":[]},"episode":{"media":{"mp3":"https:\/\/www.podtrac.com\/pts\/redirect.mp3\/traffic.megaphone.fm\/HUBB8093533675.mp3?updated=1720108920"},"coverUrl":"https:\/\/federalnewsnetwork.com\/wp-content\/uploads\/2023\/12\/3000x3000_Federal-Drive-GEHA-150x150.jpg","title":"AI \u2018expressions of interest\u2019 flood into TMF Board","description":"[hbidcpodcast podcastid='5064302']nnThe Technology Modernization Fund Board\u2019s $18 million investment in the State Department\u2019s generative artificial intelligence program is just scratching the surface.nnThe board is expecting a rush of proposals for AI projects, particularly those that are under $6 million dollars or under 18 months in total length.nnHarrison Smith, a member of the Technology Modernization Fund board, said since the <a href="https:\/\/federalnewsnetwork.com\/artificial-intelligence\/2024\/02\/tmf-seeks-agency-proposals-to-accelerate-ai-rollout-across-government\/">call went out to agencies<\/a> earlier this year for AI proposals or ideas, the board has received about 100 \u201cexpressions of interest.\u201dnn[caption id="attachment_2409047" align="alignright" width="478"]<img class="wp-image-2409047 size-full" src="https:\/\/federalnewsnetwork.com\/wp-content\/uploads\/2019\/08\/harrisonsmith-IRS.jpg" alt="" width="478" height="262" \/> Harrison Smith is a member of the TMF Board.[\/caption]nn\u201cThe one piece that's a little, a little different there is that the board has allowed for streamlined expressions of interest from agencies. We have to work through the process, reach out and talk to the entities,\u201d Smith said in an interview with Federal News Network. \u201cBut honestly, the administration has been very clear. We have an obligation in the TMF and, as part of that, we have to the harness the power of artificial intelligence for good while protecting people from its risks. I believe strongly that the TMF is one of the ways to do that.\u201dnnThe board\u2019s streamlined expressions of interest approach, as well as the changes to the repayment structure, has caused a 10-fold increase in the number of proposals agencies submitted to the board.nnSmith said part of the reason is the $1 billion in funding the TMF received from the American Rescue Plan Act, but another part is because the board and program management office has done more to educate and help agencies.n<h2>New\u00a0 executive director for TMF office<\/h2>nTo that end, the General Services Administration named Larry Bafundo as the permanent executive director of the TMF PMO yesterday.nnKaty Kale, GSA\u2019s deputy administrator, announced his promotion to staff in an email that highlighted his \u201cthoughtful and strategic leadership that has set up the TMF team for future success.\u201dnnBafundo returned to GSA in January to be deputy executive director and acting executive director of the TMF program management office, replacing Raylene Yung. Kale said in the email obtained by Federal News Network, that Bafundo has provided. The TMF Board has made nine awards worth more than $168 million since January.nnThe <a href="https:\/\/www.gsa.gov\/about-us\/newsroom\/news-releases\/technology-modernization-fund-announces-investment-06182024">latest awards<\/a> went to the Federal Election Commission for $8.8 million to modernize its FECFile applications, which is running on software from 1997, the Interior Department\u2019s Bureau of Indian Education for $5.86 million to modernize the websites and other online tools for BIE-funded schools in Tribal communities, and to the Energy Department for $17 million to modernize its human resources IT systems by moving to a software-as-a-service platform.nnWhile none of these three awards focus on AI, the board expects to continue to review and award proposals seeking to implement the emerging technology.n<h2>Educating agencies on AI proposals<\/h2>nThat is why <a href="https:\/\/federalnewsnetwork.com\/it-modernization\/2024\/05\/state-noaa-education-win-new-it-modernization-money\/">State\u2019s award in May<\/a> is expected to be the first of several.nn\u201cThe TMF call for AI and GenAI proposals, specifically calls out mission enabling approaches. This idea of we want to be able to test in certain areas to understand what might actually work, but if you can get to actual use cases that are helpful, like in the Department of State's instance, it's a great thing. It really drives the mission and enabling aspect of technology,\u201d Smith said. \u201cEveryone likes that flashy tool, but one that actually helps the Department of State actually just go through and operate its more than 270 diplomatic posts worldwide where there is a ton of data that comes in is really the question that the proposal answers. How is the Department of State going to be able to empower its global staff to work faster and easier, and with better information?\u201dnnOne way the board is trying to refine agency proposals, and especially for those in the AI area, is through holding \u201coffice hours.\u201dnnSmith said agencies submit an \u201cexpression of interest,\u201d which is an email about how they want to use the capabilities.nn\u201cThen you have an opportunity to talk to the TMF PMO. That's an area where the board and the PMO have really started to lean in on because those conversations about \u2018Hey, could you make it look like this?\u2019 or \u2018Hey, what about that?\u2019 and \u2018We need this type of repayment.\u2019 Those have reinforced what we are trying to do,\u201d he said. \u201cI personally have spent a good amount of time talking to folks about how are you going to make this work, what are your procurement challenges, is this an existing procurement or are you going to try to do a new one? How are you going to engage with industry to make sure you've got the best outcomes? There's a lot there already but we've really continued to lean into that because it's shown a lot of benefits based on our feedback from the agencies.\u201d"}};

The Technology Modernization Fund Board’s $18 million investment in the State Department’s generative artificial intelligence program is just scratching the surface.

The board is expecting a rush of proposals for AI projects, particularly those that are under $6 million dollars or under 18 months in total length.

Harrison Smith, a member of the Technology Modernization Fund board, said since the call went out to agencies earlier this year for AI proposals or ideas, the board has received about 100 “expressions of interest.”

Harrison Smith is a member of the TMF Board.

“The one piece that’s a little, a little different there is that the board has allowed for streamlined expressions of interest from agencies. We have to work through the process, reach out and talk to the entities,” Smith said in an interview with Federal News Network. “But honestly, the administration has been very clear. We have an obligation in the TMF and, as part of that, we have to the harness the power of artificial intelligence for good while protecting people from its risks. I believe strongly that the TMF is one of the ways to do that.”

The board’s streamlined expressions of interest approach, as well as the changes to the repayment structure, has caused a 10-fold increase in the number of proposals agencies submitted to the board.

Smith said part of the reason is the $1 billion in funding the TMF received from the American Rescue Plan Act, but another part is because the board and program management office has done more to educate and help agencies.

New  executive director for TMF office

To that end, the General Services Administration named Larry Bafundo as the permanent executive director of the TMF PMO yesterday.

Katy Kale, GSA’s deputy administrator, announced his promotion to staff in an email that highlighted his “thoughtful and strategic leadership that has set up the TMF team for future success.”

Bafundo returned to GSA in January to be deputy executive director and acting executive director of the TMF program management office, replacing Raylene Yung. Kale said in the email obtained by Federal News Network, that Bafundo has provided. The TMF Board has made nine awards worth more than $168 million since January.

The latest awards went to the Federal Election Commission for $8.8 million to modernize its FECFile applications, which is running on software from 1997, the Interior Department’s Bureau of Indian Education for $5.86 million to modernize the websites and other online tools for BIE-funded schools in Tribal communities, and to the Energy Department for $17 million to modernize its human resources IT systems by moving to a software-as-a-service platform.

While none of these three awards focus on AI, the board expects to continue to review and award proposals seeking to implement the emerging technology.

Educating agencies on AI proposals

That is why State’s award in May is expected to be the first of several.

“The TMF call for AI and GenAI proposals, specifically calls out mission enabling approaches. This idea of we want to be able to test in certain areas to understand what might actually work, but if you can get to actual use cases that are helpful, like in the Department of State’s instance, it’s a great thing. It really drives the mission and enabling aspect of technology,” Smith said. “Everyone likes that flashy tool, but one that actually helps the Department of State actually just go through and operate its more than 270 diplomatic posts worldwide where there is a ton of data that comes in is really the question that the proposal answers. How is the Department of State going to be able to empower its global staff to work faster and easier, and with better information?”

One way the board is trying to refine agency proposals, and especially for those in the AI area, is through holding “office hours.”

Smith said agencies submit an “expression of interest,” which is an email about how they want to use the capabilities.

“Then you have an opportunity to talk to the TMF PMO. That’s an area where the board and the PMO have really started to lean in on because those conversations about ‘Hey, could you make it look like this?’ or ‘Hey, what about that?’ and ‘We need this type of repayment.’ Those have reinforced what we are trying to do,” he said. “I personally have spent a good amount of time talking to folks about how are you going to make this work, what are your procurement challenges, is this an existing procurement or are you going to try to do a new one? How are you going to engage with industry to make sure you’ve got the best outcomes? There’s a lot there already but we’ve really continued to lean into that because it’s shown a lot of benefits based on our feedback from the agencies.”

The post AI ‘expressions of interest’ flood into TMF Board first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/artificial-intelligence/2024/07/ai-expressions-of-interest-flood-into-tmf-board/feed/ 0
Army faces data overload but LLMs are not the answer https://federalnewsnetwork.com/army/2024/07/army-faces-data-overload-but-llms-are-not-the-answer/ https://federalnewsnetwork.com/army/2024/07/army-faces-data-overload-but-llms-are-not-the-answer/#respond Wed, 03 Jul 2024 18:26:01 +0000 https://federalnewsnetwork.com/?p=5063324 "Everybody who's acquiring AI from the commercial world — demand to see where the data came from. And don't stop until they tell you," said Stephen Riley.

The post Army faces data overload but LLMs are not the answer first appeared on Federal News Network.

]]>
Army leaders and soldiers are inundated with data — the sheer volume of information is hindering their decision-making and causing analysis paralysis. But turning to Chat GPT-like tools to help commanders get after this problem might not be the answer.

“Ninety percent of the time, don’t do it. It’s the easy button. But using [large language models] like Chat GPT or Gemini — that is boiling the ocean to make yourself a cup of coffee. You don’t have the compute resources to run effective LLMs down at the tactical edge,” Stephen Riley, who is part of the  Army engineering team at Google, said during an Association of the U.S. Army event Tuesday.

The Army generates a vast amount of data due to its large number of personnel and extensive range of operations, making the service one of the largest AI users among the military branches. But having a lot of data does not mean Army leaders can get actionable insights from it.

“I say there’s too much damn data out there. We can’t overload our warfighters and our leaders with too much data,” said Young Bang, the principal deputy assistant secretary of the Army for acquisition, logistics and technology.

Google, for example, improved the quality of search results long before the advent of large language models, and the Army could apply similar methods to how it handles its large swaths of data, said Riley. 

One way the tech giant worked to improve search results was by analyzing which search results were clicked on most often and identifying which results were most useful to the most users.

Additionally, the company developed a knowledge graph that “represents  widely accepted truths and relationships.” This approach helps ground search results in established knowledge, which subsequently requires less computational power than LLMs.

“Now we’ve got two things working in tandem. We’ve got what’s been most useful to the most people and we’ve got what is actually a good result because it conforms with generally accepted truth. All of this doesn’t require LLMs. So how do we do this with the Army? Let’s start building a knowledge graph of things that are true for the Army, said Riley.

“We don’t need to train a gigantic LLM with all of the ADPs and FMs and say, ‘All right, we’ve got a model. You could actually encode all of those ADPs, all the operations stuff, all the intel stuff — we could encode that into a knowledge graph, which requires infinitely less compute power. That’s something you could deploy forward on a pretty small box. I encourage everybody to look first at the old ways of doing things. They tend to be more efficient. I got to think a little harder about how to implement them. But it’s a lot more efficient and it’s very doable.”

Bang said that while LLMs are useful for general purposes, using them in combination with small language models for specific military terms, military jargon, cyber terms or other specific languages would provide better results for soldiers. 

“Do you really need LLMs and SLMs at the edge? No. If you use that and overlay a knowledge graph, I think that’s a much better practical implementation of things. Because we can’t afford all the computing resources that we’re going to need to process all that or do the training on it or even the retraining or the inference at the edge, said Bang. 

But the concern is that malicious actors can potentially overload existing data sets with misinformation, which would lead to a shift in what’s considered a commonly accepted truth or knowledge. Riley said that’s why it’s important to have humans in the loop. “We cannot abdicate human reasoning to the machines.”

“You could theoretically overload it and start shifting truth in on a given access to some degree. But as we index stuff, the data that we index is also run through the current knowledge graph. But we also have humans in the loop; we are watching what’s going on with the trends, with the shifting of the Overton window there, said Riley.

Poisoned datasets

When using AI datasets, particularly for training large language models, malicious actors don’t have to poison the whole dataset. Compromising even a small piece of a server will introduce bad data that will contaminate the overall training dataset. That’s why the military services acquiring AI models and data sets from the commercial world should “demand to see where the data came from.”

“Google ain’t going to tell you. Demand it of us anyway. Microsoft ain’t going tell you. Demand it anyway. We have already seen cases where companies building large LLMs have sourced data from other companies that say they have a bunch of data. And it turns out they source from other companies that are given some pretty bad stuff. Maybe not deliberate misinformation, but stuff that absolutely would not comply with our nation or Army values.  In all cases, demand to see where that data came from. And don’t stop until they tell you, said Riley.

“We’ve talked about this data bill of materials. Famously, after Solar Winds, people are asking for a software bill of materials. We must develop some kind of data bill of materials and make that it’s a standard part of acquisition of these AI systems. We’ve got to do it because we’re already seeing this problem whether you know it or not.”

The post Army faces data overload but LLMs are not the answer first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/army/2024/07/army-faces-data-overload-but-llms-are-not-the-answer/feed/ 0
Intelligence community pushes for ‘AI at scale’ under new IT roadmap https://federalnewsnetwork.com/inside-ic/2024/07/intelligence-community-pushes-for-ai-at-scale-under-new-it-roadmap/ https://federalnewsnetwork.com/inside-ic/2024/07/intelligence-community-pushes-for-ai-at-scale-under-new-it-roadmap/#respond Tue, 02 Jul 2024 19:27:53 +0000 https://federalnewsnetwork.com/?p=5062104 The intelligence community is also pursuing initiatives in cloud computing, data management, zero trust cybersecurity and quantum-resistant encryption.

The post Intelligence community pushes for ‘AI at scale’ under new IT roadmap first appeared on Federal News Network.

]]>
var config_5056532 = {"options":{"theme":"hbidc_default"},"extensions":{"Playlist":[]},"episode":{"media":{"mp3":"https:\/\/www.podtrac.com\/pts\/redirect.mp3\/traffic.megaphone.fm\/HUBB5845872464.mp3?updated=1719521317"},"coverUrl":"https:\/\/federalnewsnetwork.com\/wp-content\/uploads\/2024\/02\/Inside-the-IC-3000x3000-podcast-tile-Booz-Allen-150x150.jpg","title":"The intelligence community has a big new tech strategy","description":"[hbidcpodcast podcastid='5056532']nnThe intelligence community\u2019s new IT roadmap lays out a plan to pursue artificial intelligence \u201cat scale,\u201d as IC technology leaders develop guidance for AI standards and services.nnThe Office of the Director of National Intelligence published the <a href="https:\/\/www.odni.gov\/files\/documents\/CIO\/IC-IT-Roadmap-Vision-For-the-IC-Info-Environment-May2024.pdf" target="_blank" rel="noopener">roadmap<\/a>, \u201cVision for the IC Information Environment,\u201d late last month. In an exclusive interview, IC Chief Information Officer Adelle Merritt said the roadmap calls for \u201cbold and transformational investments\u201d in technology. She said the roadmap was developed in coordination with all 18 elements of the intelligence community.nn\u201cThis roadmap really provides a unified vision for where the IC needs to go over the next five years,\u201d Merritt said on <a href="https:\/\/federalnewsnetwork.com\/shows\/inside-the-ic-podcast\/" target="_blank" rel="noopener">Inside the IC.<\/a>nnThe strategy makes clear that officials believe AI is poised to \u201ctransform the IC\u2019s mission.\u201d It describes several efforts to advance \u201cAI at scale\u201d through 2030.nn\u201cSecure, generative, and predictive AI can reduce the time for intelligence insights from days or weeks to mere seconds,\u201d the document states.nnBy fiscal 2025, intelligence community officials will develop enterprise guidance for AI, including standards, use policies and architectures, to guide how intelligence agencies adopt the technology. The IC\u2019s recently designated chief AI officer is also leading the development <a href="https:\/\/federalnewsnetwork.com\/artificial-intelligence\/2024\/04\/intelligence-community-gets-a-chief-ai-officer\/" target="_blank" rel="noopener">of a new IC-wide AI strategy.<\/a>nnThe roadmap also shows that between fiscal 2026 and 2029, officials plan to establish \u201cAI enabling services at scale,\u201d including a model repository and training data.nnMerritt said ODNI officials need to move quickly with their guidance to keep up with the rapidly evolving state of AI.nn\u201cIt is critically important that we focus on getting this out and not let it languish, because things are moving on,\u201d she said. \u201cThe world has started to adopt this. And it's a really exciting capability.\u201dnnAt the same time, Merritt emphasized that the IT roadmap\u2019s five focus areas and 19 initiatives can\u2019t be done in isolation.nn\u201cIt is a collection of things that all must be done,\u201d she said. \u201cIt's not something that's ala carte, that you can pick and choose what you decide you want to work on.\u201dn<h2>\u2018Optimizing\u2019 the IC\u2019s cloud<\/h2>nThe intelligence community\u2019s successful use of AI will in large part depend on other elements of the roadmap, including cloud computing, <a href="https:\/\/federalnewsnetwork.com\/inside-ic\/2023\/07\/intel-communitys-new-data-strategy-looks-to-lay-foundations-of-ai-future\/" target="_blank" rel="noopener">data management<\/a> and cybersecurity.nn\u201cAs a CIO, when I hear about AI, I quickly think, you're going to need a lot of data in order to do AI,\u201d Merritt said. \u201cAnd to have all that data, I'm going to need to store it. I\u2019m also going to need to process it. And I'm going to need to move it around from where I get it to where the users are. So when I hear AI as a CIO, I'm thinking, storage, compute and transport.\u201dnnThe roadmap lays out a key initiative to \u201coptimize\u201d the intelligence community\u2019s use of the cloud. Intelligence agencies had initially adopted cloud infrastructure using Amazon Web Services under the CIA\u2019s \u201cC2S\u201d contract. But agencies are now moving to the CIA\u2019s \u201cC2E\u201d contract, which includes five major cloud vendors.nnMerritt says four of the major cloud providers have now received an authority-to-operate on the IC\u2019s classified networks.nn\u201cSo we now have some of the best cloud capability on the planet available to us, and so making sure that we continue to nurture that infrastructure underneath upon which all the amazing capabilities will be added,\u201d Merritt said.nnIn fiscal 2025, the roadmap describes how the intelligence community will develop \u201ca tool, methodology, or process to help IC elements determine which approach and service provider would be most appropriate to meet their individual requirements.\u201dnnMerritt said a multi-vendor cloud environment is \u201ccritical\u201d for the ICnn\u201cIt is critically important that we turn the different capabilities that each of these unique cloud service providers have and turn them into mission advantage, and not just resort to the lowest common denominator,\u201d she said. \u201cAnd so much as we learned how to operate in a single cloud environment, we are now turning our attention to learn how to operate and thrive in a multiple cloud environment.\u201dn<h2>Zero trust steering committee<\/h2>nThe roadmap also homes in \u201crobust cybersecurity\u201d as a key focus area. And the IC\u2019s strategy for zero trust largely lines up with <a href="https:\/\/federalnewsnetwork.com\/defense-main\/2024\/04\/dod-to-automate-assessment-of-zero-trust-implementation-plans\/" target="_blank" rel="noopener">the Defense Department\u2019s timelines for adopting the security architecture.<\/a>nnThe strategy states the intelligence community will achieve a \u201cbasic\u201d level of zero trust maturity by Sept. 30, 2025, and an \u201cintermediate\u201d state by Sept. 30, 2027.nnMerritt said the IC has also established a \u201czero trust steering committee\u201d to guide those efforts. The committee includes officials from all 18 elements of the intelligence community.nn\u201cSome of our elements have done some amazing things on their zero trust journey, and they have been very willing to share,\u201d she said. \u201cSo we've had some technical exchanges where we brought in subject matter experts in a specific area invited technical experts from across the elements to learn and to ask questions, so we can accelerate our journey by sharing our knowledge.\u201dnnMeanwhile, the roadmap also highlights the move to post-quantum cryptography. \u201cCryptographic security in a post-quantum world will be pivotal for safeguarding data and digital communications,\u201d the document states. \u201cThis includes the development and deployment of advanced cryptographic algorithms designed to be secure against threats from quantum computers, both in commercially available and government devices.\u201dnnBy fiscal 2027, the intelligence community plans to deploy quantum-resistant cryptography solutions \u201cto bolster the confidentiality of IC networks and transport services,\u201d the plan shows.nnMerritt said the IC is working on the plan for deploying quantum-resistant algorithms in the coming years.nn\u201cIt is important that we do this in a deliberative, thoughtful way, because whenever you start to change technology, you do open up some risk,\u201d she said. \u201cAnd so when we talk about this as being a race, we can't be moving so fast that we get sloppy on this.\u201d"}};

The intelligence community’s new IT roadmap lays out a plan to pursue artificial intelligence “at scale,” as IC technology leaders develop guidance for AI standards and services.

The Office of the Director of National Intelligence published the roadmap, “Vision for the IC Information Environment,” late last month. In an exclusive interview, IC Chief Information Officer Adelle Merritt said the roadmap calls for “bold and transformational investments” in technology. She said the roadmap was developed in coordination with all 18 elements of the intelligence community.

“This roadmap really provides a unified vision for where the IC needs to go over the next five years,” Merritt said on Inside the IC.

The strategy makes clear that officials believe AI is poised to “transform the IC’s mission.” It describes several efforts to advance “AI at scale” through 2030.

“Secure, generative, and predictive AI can reduce the time for intelligence insights from days or weeks to mere seconds,” the document states.

By fiscal 2025, intelligence community officials will develop enterprise guidance for AI, including standards, use policies and architectures, to guide how intelligence agencies adopt the technology. The IC’s recently designated chief AI officer is also leading the development of a new IC-wide AI strategy.

The roadmap also shows that between fiscal 2026 and 2029, officials plan to establish “AI enabling services at scale,” including a model repository and training data.

Merritt said ODNI officials need to move quickly with their guidance to keep up with the rapidly evolving state of AI.

“It is critically important that we focus on getting this out and not let it languish, because things are moving on,” she said. “The world has started to adopt this. And it’s a really exciting capability.”

At the same time, Merritt emphasized that the IT roadmap’s five focus areas and 19 initiatives can’t be done in isolation.

“It is a collection of things that all must be done,” she said. “It’s not something that’s ala carte, that you can pick and choose what you decide you want to work on.”

‘Optimizing’ the IC’s cloud

The intelligence community’s successful use of AI will in large part depend on other elements of the roadmap, including cloud computing, data management and cybersecurity.

“As a CIO, when I hear about AI, I quickly think, you’re going to need a lot of data in order to do AI,” Merritt said. “And to have all that data, I’m going to need to store it. I’m also going to need to process it. And I’m going to need to move it around from where I get it to where the users are. So when I hear AI as a CIO, I’m thinking, storage, compute and transport.”

The roadmap lays out a key initiative to “optimize” the intelligence community’s use of the cloud. Intelligence agencies had initially adopted cloud infrastructure using Amazon Web Services under the CIA’s “C2S” contract. But agencies are now moving to the CIA’s “C2E” contract, which includes five major cloud vendors.

Merritt says four of the major cloud providers have now received an authority-to-operate on the IC’s classified networks.

“So we now have some of the best cloud capability on the planet available to us, and so making sure that we continue to nurture that infrastructure underneath upon which all the amazing capabilities will be added,” Merritt said.

In fiscal 2025, the roadmap describes how the intelligence community will develop “a tool, methodology, or process to help IC elements determine which approach and service provider would be most appropriate to meet their individual requirements.”

Merritt said a multi-vendor cloud environment is “critical” for the IC

“It is critically important that we turn the different capabilities that each of these unique cloud service providers have and turn them into mission advantage, and not just resort to the lowest common denominator,” she said. “And so much as we learned how to operate in a single cloud environment, we are now turning our attention to learn how to operate and thrive in a multiple cloud environment.”

Zero trust steering committee

The roadmap also homes in “robust cybersecurity” as a key focus area. And the IC’s strategy for zero trust largely lines up with the Defense Department’s timelines for adopting the security architecture.

The strategy states the intelligence community will achieve a “basic” level of zero trust maturity by Sept. 30, 2025, and an “intermediate” state by Sept. 30, 2027.

Merritt said the IC has also established a “zero trust steering committee” to guide those efforts. The committee includes officials from all 18 elements of the intelligence community.

“Some of our elements have done some amazing things on their zero trust journey, and they have been very willing to share,” she said. “So we’ve had some technical exchanges where we brought in subject matter experts in a specific area invited technical experts from across the elements to learn and to ask questions, so we can accelerate our journey by sharing our knowledge.”

Meanwhile, the roadmap also highlights the move to post-quantum cryptography. “Cryptographic security in a post-quantum world will be pivotal for safeguarding data and digital communications,” the document states. “This includes the development and deployment of advanced cryptographic algorithms designed to be secure against threats from quantum computers, both in commercially available and government devices.”

By fiscal 2027, the intelligence community plans to deploy quantum-resistant cryptography solutions “to bolster the confidentiality of IC networks and transport services,” the plan shows.

Merritt said the IC is working on the plan for deploying quantum-resistant algorithms in the coming years.

“It is important that we do this in a deliberative, thoughtful way, because whenever you start to change technology, you do open up some risk,” she said. “And so when we talk about this as being a race, we can’t be moving so fast that we get sloppy on this.”

The post Intelligence community pushes for ‘AI at scale’ under new IT roadmap first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/inside-ic/2024/07/intelligence-community-pushes-for-ai-at-scale-under-new-it-roadmap/feed/ 0
DoD’s Joint Staff GenAI sprint lays out 8 internal use cases https://federalnewsnetwork.com/defense-main/2024/07/dods-joint-staff-genai-sprint-lays-out-8-internal-use-cases/ https://federalnewsnetwork.com/defense-main/2024/07/dods-joint-staff-genai-sprint-lays-out-8-internal-use-cases/#respond Mon, 01 Jul 2024 14:22:29 +0000 https://federalnewsnetwork.com/?p=5060021 Lt. Gen. Todd Isaacson, the CIO for the Joint Chiefs of Staff/J-6, said a review team is completing recommendations for leadership around using GenAI.

The post DoD’s Joint Staff GenAI sprint lays out 8 internal use cases first appeared on Federal News Network.

]]>
var config_5060098 = {"options":{"theme":"hbidc_default"},"extensions":{"Playlist":[]},"episode":{"media":{"mp3":"https:\/\/www.podtrac.com\/pts\/redirect.mp3\/traffic.megaphone.fm\/HUBB5496187594.mp3?updated=1719842768"},"coverUrl":"https:\/\/federalnewsnetwork.com\/wp-content\/uploads\/2023\/12\/3000x3000_Federal-Drive-GEHA-150x150.jpg","title":"DoD\u2019s Joint Staff GenAI sprint lays out 8 internal use cases","description":"[hbidcpodcast podcastid='5060098']nnBALTIMORE \u2014 The Defense Department\u2019s Joint Chiefs of Staff is getting on the artificial intelligence bandwagon.nnThe J6 recently completed a review of potential generative AI use cases to improve internal processes and now are deciding the next steps toward implementation.nnLt. Gen. Todd Isaacson, the director for command, control, communications, computers and cyber and the chief information officer for the <a href="https:\/\/www.jcs.mil\/" target="_blank" rel="noopener">Joint Chiefs of Staff\/J-6<\/a>, said at the <a href="https:\/\/events.afcea.org\/AFCEACyber24\/Public\/enter.aspx" target="_blank" rel="noopener">AFCEA TechNet Cyber conference<\/a> that the 90-day sprint looked at commercial generative AI tools and large language models that are already available to improve internal processes, such as a task or contract award.nn[caption id="attachment_5060030" align="alignright" width="345"]<img class="wp-image-5060030" src="https:\/\/federalnewsnetwork.com\/wp-content\/uploads\/2024\/07\/Todd-Isaacson-245x300.jpg" alt="" width="345" height="423" \/> Lt. Gen. Todd Isaacson is the director for command, control, communications, computers and cyber and the chief information officer for the Joint Chiefs of Staff\/J-6.[\/caption]nn\u201cThe second purpose was to then determine how we could potentially organize the Joint Staff and determine whether or not we wanted to stand up a Joint Staff chief data and AI officer (CDAO), which we have not yet determined, but is informed by the work that was done,\u201d Isaacson said in an interview with Federal News Network. \u201cThen, how do we endure the kinds of use cases that we had already put tools and capabilities in place? The final piece is how do we train teammates that when they come on board to utilize generative AI tools that are available, and institutionalize those? That's the big idea behind the sprint. Now we're in the reflection phase and getting ready to report back to leadership to determine what were the next set of a series of steps.\u201dnnIsaacson said the review team started with four use cases, but ended up completing eight as interest and excitement over the effort grew. He said the team has another 10 waiting for review. He added when the call went out for volunteers for the task force, the response was overwhelming, showing how much interest there is to use the GenAI tools.nnThe use cases were focused on <a href="https:\/\/federalnewsnetwork.com\/defense-news\/2023\/11\/dods-new-ai-strategy-focuses-on-adoption\/">how the GenAI tools<\/a> could improve how action officers conduct analysis of large volumes of information, particularly historical financial, personnel and logistics data.nn\u201cThere's a lot of information to cull through so can you leverage LLMs to cull down the important components to maintain situational awareness on big items like that?\u201d he said. \u201cThose were largely internal. We didn't use it for an intelligence function. We did it largely for internal joint staff keeping the conveyor belt moving kind of processes.\u201dn<h2>Is a Joint Staff CDAO in the works?<\/h2>nSeveral military agencies and services are starting to take a deeper look at GenAI. The Air Force, for example, in May <a href="https:\/\/federalnewsnetwork.com\/defense-main\/2024\/06\/air-force-unveils-new-generative-ai-platform\/">opened up a GenAI platform<\/a> to airmen and civilians to use. Called the Non-classified Internet Protocol Generative Pre-training Transformer (NIPRGPT), the service hopes it can help with tasks such as coding, correspondence and content summarization, all on the service\u2019s unclassified networks.nnAt the same time, Army CIO Leo Garciga is <a href="https:\/\/federalnewsnetwork.com\/cloud-computing\/2024\/03\/dod-cloud-exchange-2024-armys-leo-garciga-on-clearing-obstacles-to-digital-transformation\/">developing a new policy<\/a> around GenAI and LLMs with a focus on data protection and the creation of guardrails for the interaction between the government and industry.nnOver the next 60 days, the J6 review team will create recommendations for the J-6 leaderships to decide next steps.nnIsaacson said the recommendations also will include whether creating a CDAO would be beneficial for the organization. DoD <a href="https:\/\/federalnewsnetwork.com\/defense-main\/2021\/12\/pentagon-to-reshuffle-leadership-roles-for-ai-data-digital-services\/">stood up its CDAO<\/a> in February 2022.nn\u201cWe're going to have a conversation about if we want to organize ourselves in a different way to include an organic CDAO and whether or not we would find that valuable,\u201d he said. \u201cInto the fall, we would then be able to gain momentum based upon the decisions that are made. Currently, we have a CIO, which I perform. We have a CDO, which belongs to the Joint Staff\/J6. That\u2019s our current organization, but we saw the opportunity of the GenAI Task Force to take ourselves to task to determine through learning and doing if we wanted to establish a Joint Staff CDAO or not. It might be, we can't make the investment because we're not getting any larger. It might be, \u2018hey, we definitely want to invest in it and this is how we're going to do it.\u2019 But nothing has been predetermined in that regard.\u201dnnThe GenAI effort is part of a <a href="https:\/\/federalnewsnetwork.com\/defense-news\/2023\/12\/to-build-network-that-works-with-allies-indopacom-starts-from-scratch-with-zero-trust\/">broader digital transformation<\/a> effort across the J6.nnIsaacson said his office issued a digital transformation campaign plan that outlines four levels of effort.n<ul>n \t<li>People<\/li>n \t<li>Infrastructure<\/li>n \t<li>Tools and capabilities<\/li>n \t<li>Rapid adoption<\/li>n<\/ul>n\u201cThe people piece is how do we develop, maintain and attract a digitally-enabled workforce. This is something that we are supremely focused on, and we appreciate the insights that industry could help us to make our pursuits more attractive in terms of best practice,\u201d he said. \u201cThe second is the infrastructure. We partner very, very closely with the Defense Information Systems Agency, Joint Force Headquarters, DoD Information Network (JTHQ-DoDIN) partners to set the theater, set the conditions and set the enterprise. We rely very heavily as a joint force on that enterprise, and the investments for this data-centric pivot is laying a burden on the enterprise and the infrastructure.\u201dnnThe GenAI review sprint falls into the third level of effort around adopting tools and capabilities. Isaacson said the use of data analytics and other emerging tools and capabilities need to lead to the J6 receiving better and more timely insights as part of its goal to achieve global information dominance.nnFinally around rapid adoption, Isaacson said the DoD doesn\u2019t necessarily move fast enough to take advantage of new or emerging technologies.nn\u201cWe're doing it better than we used to, and we are continuing to endeavor to make it better as we partner with our industry partners,\u201d he said. \u201cWhen you think about, say 10 or 15 years ago, when we would deliver a capability, oftentimes there was a tremendous learning curve that went along with it. These days the extraordinary innovation in digital awareness that our service members have, we're able to deliver capability to them very quickly. But also, they become incredibly familiar with it right away and provide the feedback that I think is an important part. So having digital natives in our services give us a competitive advantage.\u201dnn nn "}};

BALTIMORE — The Defense Department’s Joint Chiefs of Staff is getting on the artificial intelligence bandwagon.

The J6 recently completed a review of potential generative AI use cases to improve internal processes and now are deciding the next steps toward implementation.

Lt. Gen. Todd Isaacson, the director for command, control, communications, computers and cyber and the chief information officer for the Joint Chiefs of Staff/J-6, said at the AFCEA TechNet Cyber conference that the 90-day sprint looked at commercial generative AI tools and large language models that are already available to improve internal processes, such as a task or contract award.

Lt. Gen. Todd Isaacson is the director for command, control, communications, computers and cyber and the chief information officer for the Joint Chiefs of Staff/J-6.

“The second purpose was to then determine how we could potentially organize the Joint Staff and determine whether or not we wanted to stand up a Joint Staff chief data and AI officer (CDAO), which we have not yet determined, but is informed by the work that was done,” Isaacson said in an interview with Federal News Network. “Then, how do we endure the kinds of use cases that we had already put tools and capabilities in place? The final piece is how do we train teammates that when they come on board to utilize generative AI tools that are available, and institutionalize those? That’s the big idea behind the sprint. Now we’re in the reflection phase and getting ready to report back to leadership to determine what were the next set of a series of steps.”

Isaacson said the review team started with four use cases, but ended up completing eight as interest and excitement over the effort grew. He said the team has another 10 waiting for review. He added when the call went out for volunteers for the task force, the response was overwhelming, showing how much interest there is to use the GenAI tools.

The use cases were focused on how the GenAI tools could improve how action officers conduct analysis of large volumes of information, particularly historical financial, personnel and logistics data.

“There’s a lot of information to cull through so can you leverage LLMs to cull down the important components to maintain situational awareness on big items like that?” he said. “Those were largely internal. We didn’t use it for an intelligence function. We did it largely for internal joint staff keeping the conveyor belt moving kind of processes.”

Is a Joint Staff CDAO in the works?

Several military agencies and services are starting to take a deeper look at GenAI. The Air Force, for example, in May opened up a GenAI platform to airmen and civilians to use. Called the Non-classified Internet Protocol Generative Pre-training Transformer (NIPRGPT), the service hopes it can help with tasks such as coding, correspondence and content summarization, all on the service’s unclassified networks.

At the same time, Army CIO Leo Garciga is developing a new policy around GenAI and LLMs with a focus on data protection and the creation of guardrails for the interaction between the government and industry.

Over the next 60 days, the J6 review team will create recommendations for the J-6 leaderships to decide next steps.

Isaacson said the recommendations also will include whether creating a CDAO would be beneficial for the organization. DoD stood up its CDAO in February 2022.

“We’re going to have a conversation about if we want to organize ourselves in a different way to include an organic CDAO and whether or not we would find that valuable,” he said. “Into the fall, we would then be able to gain momentum based upon the decisions that are made. Currently, we have a CIO, which I perform. We have a CDO, which belongs to the Joint Staff/J6. That’s our current organization, but we saw the opportunity of the GenAI Task Force to take ourselves to task to determine through learning and doing if we wanted to establish a Joint Staff CDAO or not. It might be, we can’t make the investment because we’re not getting any larger. It might be, ‘hey, we definitely want to invest in it and this is how we’re going to do it.’ But nothing has been predetermined in that regard.”

The GenAI effort is part of a broader digital transformation effort across the J6.

Isaacson said his office issued a digital transformation campaign plan that outlines four levels of effort.

  • People
  • Infrastructure
  • Tools and capabilities
  • Rapid adoption

“The people piece is how do we develop, maintain and attract a digitally-enabled workforce. This is something that we are supremely focused on, and we appreciate the insights that industry could help us to make our pursuits more attractive in terms of best practice,” he said. “The second is the infrastructure. We partner very, very closely with the Defense Information Systems Agency, Joint Force Headquarters, DoD Information Network (JTHQ-DoDIN) partners to set the theater, set the conditions and set the enterprise. We rely very heavily as a joint force on that enterprise, and the investments for this data-centric pivot is laying a burden on the enterprise and the infrastructure.”

The GenAI review sprint falls into the third level of effort around adopting tools and capabilities. Isaacson said the use of data analytics and other emerging tools and capabilities need to lead to the J6 receiving better and more timely insights as part of its goal to achieve global information dominance.

Finally around rapid adoption, Isaacson said the DoD doesn’t necessarily move fast enough to take advantage of new or emerging technologies.

“We’re doing it better than we used to, and we are continuing to endeavor to make it better as we partner with our industry partners,” he said. “When you think about, say 10 or 15 years ago, when we would deliver a capability, oftentimes there was a tremendous learning curve that went along with it. These days the extraordinary innovation in digital awareness that our service members have, we’re able to deliver capability to them very quickly. But also, they become incredibly familiar with it right away and provide the feedback that I think is an important part. So having digital natives in our services give us a competitive advantage.”

 

 

The post DoD’s Joint Staff GenAI sprint lays out 8 internal use cases first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/defense-main/2024/07/dods-joint-staff-genai-sprint-lays-out-8-internal-use-cases/feed/ 0
With new AI tools available, State Department encourages experimentation https://federalnewsnetwork.com/artificial-intelligence/2024/06/with-new-ai-tools-available-state-department-encourages-experimentation/ https://federalnewsnetwork.com/artificial-intelligence/2024/06/with-new-ai-tools-available-state-department-encourages-experimentation/#respond Fri, 28 Jun 2024 22:20:15 +0000 https://federalnewsnetwork.com/?p=5058420 State wants employees to try out new AI tools like State Chat and North Star, but also share their own use cases to help drive the agency's AI approach.

The post With new AI tools available, State Department encourages experimentation first appeared on Federal News Network.

]]>
The State Department is launching a new artificial intelligence hub and encouraging employees across the globe to experiment with AI technology in ways that help streamline their diplomatic work.

Secretary of State Antony Blinken announced “AI.State” as a “central hub for all things AI” for the department’s 80,000 employees.

“It offers formal and informal training, including videos that are up there to help folks get started,” Blinken said during an event at the State Department today. “It’s a home for all of our internal State Department AI tools, libraries of prompts and use cases. And I would just say, try it out. I’d encourage everyone to test it out, to try it out, to explore it, to try to learn from it. And also lend your own ideas and input because this is something that will continue to be iterative and a work in progress.”

The State Department last fall released an enterprise AI strategy. The strategy prioritizes an “AI-ready workforce.” The agency has also been exploring using generative AI to help employees plan their next career steps.

Blinken said a big motivation for the State Department’s use of AI is improving analysis, while also freeing up its employees to work on high-priority tasks.

“We can automate simple, routine tasks,” Blinken said. “We can summarize and translate research. Something that would take normally days, even weeks, can be done in a matter of seconds.”

Blinken and other State officials at the event today encouraged the workforce to not just experiment with AI, but share use cases to better inform the agency’s approach to the technology.

“If that particular solution isn’t shared, if it just stays with that one person, that one group and that one country or that one place, then you have this reinvention of the wheel that has to go on time and time again,” Blinken said. “Our ability to draw from the experience that all of our teams are going to have using, deploying, experimenting with AI all around the world, but then bringing it back and having these use cases – especially the ones that are producing really interesting new things –come to the top, but then be taken and shared across the enterprise.”

State’s AI ‘North Star’

Earlier this spring, the State Department rolled out a new AI tool called “North Star” that can analyze and summarize news stories in more than 200 countries and in over 100 languages. Matthew Graviss, the State Department’s chief data and AI officer, said the agency’s public diplomacy officers are already making use of the tool.

“The ability to summarize in the media space, and then use that time that you saved to call the reporter find out a little more context around why they wrote that article, maybe shape the next article,” Graviss said today. “It’s repurposing that time to the higher value asks that we want our experts in diplomacy doing.”

Elizabeth Allen, under secretary for public diplomacy and public affairs, estimated the media monitoring tool could save PD officers 180,000 hours over the next year. “We have a lot of opportunity in the communication space to use AI,” Allen said today.

But she added that State’s public affairs offices also need to ensure that people are ultimately reviewing any outputs from generative AI, particularly if it helps feed prepared remarks made by ambassadors.

“We always have to be making sure that we have human checks, particularly when it comes to public communications,” Allen said.

The State Department also recently released a chatbot, “State Chat.” Graviss said his team can analyze the prompts and tweak the tool accordingly.

Kelly Fletcher, State’s chief information officer, said the department’s cybersecurity specialists are also “red teaming” any new enterprise tools like State Chat.

“We do that with almost all of our platforms and systems,” Fletcher said. “In the case of the newest AI technology, we were testing it . . . we found some stuff. Honestly, these folks managed to do some really cool sneaky things. And they were able to see what some folks’ prompts were, they were able to see information they shouldn’t have been able to see, and we fixed it.”

She said training is mandatory to use any new AI tools. And State’s IT teams are also monitoring tools like State Chat for nefarious activity.

“We can see what prompts people are using, not just to inform how is this technology being used and how is innovation happening in the field, but also we can see if somebody’s up to no good,” Fletcher said. “Whether they’re a person who works at the State Department, or somebody who’s managed to get in and is pretending to be a person who works at the State Department.”

Meanwhile, Uzra Zeya, under secretary for civilian security, democracy and human rights, said her team launched an AI research assistant called “data collection management tool,” DCT, in February 2023. Zeya said the tool will reduce by one-third – 52,000 hours per year – the time her officers spend researching and fact-checking reports.

The DCT capability is now available through AI.State.

“I’m really proud of what we’ve been able to accomplish, and I think this is an example of technology supporting not supplanting our work,” Zeya said.

Blinken said he believes AI will be fully integrated into the State Department’s work within the next 10 years.

“Some of this entails experimentation, some of it entails risk,” Blinken said. “But if we’re not leaning in, we’re going to be left out and left behind.”

The post With new AI tools available, State Department encourages experimentation first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/artificial-intelligence/2024/06/with-new-ai-tools-available-state-department-encourages-experimentation/feed/ 0
FedRAMP finalizes ‘fast pass’ approval process for AI tools https://federalnewsnetwork.com/cloud-computing/2024/06/fedramp-finalizes-fast-pass-approval-process-for-ai-tools/ https://federalnewsnetwork.com/cloud-computing/2024/06/fedramp-finalizes-fast-pass-approval-process-for-ai-tools/#respond Thu, 27 Jun 2024 22:32:05 +0000 https://federalnewsnetwork.com/?p=5056631 The new emerging technology prioritization framework will help determine which generative AI tools need to be pushed to the front of the line for approval.

The post FedRAMP finalizes ‘fast pass’ approval process for AI tools first appeared on Federal News Network.

]]>
The FedRAMP cloud security program is opening up its doors to specific types of generative artificial intelligence capabilities for priority approvals starting Aug. 31.

Vendors can submit GenAI tools, specifically used for chat interfaces and code generation, and debugging tools that use large language models (LLMs), and prompt-based image generation as well as associated application programming interfaces (APIs) that provide these functions to receive expedited review as part of Federal Risk and Authorization Management Program’s (FedRAMP) new emerging technology prioritization framework. The program office released the final version today.

“FedRAMP will open submissions for prioritization requests twice a year. Requests for prioritization by cloud service providers (CSPs) are voluntary. FedRAMP holds prioritized cloud services to the same security standards as all other cloud services, and reviews them in the same way,” the program office stated in a blog post. “FedRAMP ensures prioritized cloud services are reviewed first in the authorization process. Requests will be evaluated against the qualifying and demand criteria to ensure prioritized technologies meet the goal of ensuring agencies have access to necessary emerging technologies. Initially, FedRAMP expects to prioritize up to 12 AI-based cloud services using this framework.”

FedRAMP PMO says it will announce initial prioritization determinations by Sept. 30.

The program management office said while its started first with AI tools and capabilities, the framework is technology agnostic. It features a governance and CSP evaluation process.

“The governance process defines how up to three capabilities will be prioritized for ‘skip the line’ access to FedRAMP at any given time, and the amount of cloud service offerings (CSOs) with a given capability that will be prioritized,” the framework stated. “The CSP evaluation process outlines how new cloud service providers will have their CSOs qualified to access an accelerated review. Existing cloud service providers must work with their authorizing official and will follow the significant change request (SCR) process to include new enterprise technology (ET) CSOs in their authorization.”

New forms for FedRAMP priority process

Along with the new framework, the PMO released two forms for agencies and vendors to fill out. Cloud service providers whose offerings meet the ET criteria, and can demonstrate agency demand, can apply for the initial round of prioritization by completing the Emerging Technology Cloud Service Offering Request Form for cloud service offerings and the Emerging Technology Demand Form by Aug. 31.

The General Services Administration, which manages the FedRAMP program, issued the draft emerging technology framework in March seeking industry and agency feedback.

FedRAMP PMO developed the framework as required under the November 2023 safe, secure and trustworthy AI executive order issued by President Joe Biden.

Ryan Palmer, a senior technical and strategic advisor for FedRAMP at GSA, told Federal News Network during the 2024 Cloud Exchange, that the program office received more than 200 comments.

“Some of the things that we heard were concerns around the limits that we had in the framework. We tried to adjust those and clarify that those are going to be flexible and really driven by agency’s needs, which could be more generative AI solutions getting prioritized after the initial batch.,” Palmer said. “Prioritization is not a blocker. So it’s not that other services are not going to get prioritized. It’s just that you we do want to prioritize within our review process certain capabilities. Another area we did get feedback is on the benchmarks. Collectively, people liked the benchmarks. But some of the concerns around the benchmarks were how are they relating to different agency use cases?”

Palmer said the program office is looking at ways where they can standardize the communication around what benchmarks are relevant to the use cases.

From those initial comments, the program office made four major changes to the framework and two to the prioritization list.

Source: FedRAMP blog post June 27, 2024.

The PMO says one significant change was how it will analyze whether a service qualifies as generative AI.

“We’ve transitioned away from measuring cloud services against quantitative benchmarks and leaderboards. Instead, cloud service providers now submit public links to industry-standard ‘model cards.’ Those model cards describe key features of how their underlying AI models operate,” the PMO said. “Given the rapid pace of AI development, relying on benchmarks likely would have required an impractical amount of ongoing changes to have them continue to stay relevant across diverse use cases. Instead, FedRAMP will use the information on model cards to validate whether the AI being used is the type of capability being advertised. The purpose of collecting this information is not to assess the performance of the AI capability, but about whether the capability being offered is the one intended for prioritization.”

The PMO says it will continually review its processes and update its list as new requirements emerge, both AI and otherwise.

“FedRAMP will update and maintain an evolving list of prioritized ETs at least annually with input from agencies and industry followed by approval from the FedRAMP Board,” the framework stated. “Technologies will be removed from prioritization either by decision of the board, or when the target number of CSOs with the desired capabilities are available within the marketplace.”

 

The post FedRAMP finalizes ‘fast pass’ approval process for AI tools first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/cloud-computing/2024/06/fedramp-finalizes-fast-pass-approval-process-for-ai-tools/feed/ 0
The true cost of AI: 6 factors government agencies should consider https://federalnewsnetwork.com/commentary/2024/06/the-true-cost-of-ai-6-factors-government-agencies-should-consider/ https://federalnewsnetwork.com/commentary/2024/06/the-true-cost-of-ai-6-factors-government-agencies-should-consider/#respond Tue, 25 Jun 2024 19:58:37 +0000 https://federalnewsnetwork.com/?p=5053083 The true cost of AI encompasses a range of factors beyond just the initial investment in hardware and software.

The post The true cost of AI: 6 factors government agencies should consider first appeared on Federal News Network.

]]>
The promise of artificial intelligence includes a wide range of expectations, both for technological capabilities and its impact on how we do business. However, government agencies should consider that the cost of AI can be multifaceted and extend beyond the immediate dollar value. AI will both increase costs and cut costs, so it’s important to consider an investment in AI from a holistic perspective.

Here are some key factors to consider:

  1. Hardware costs: Graphics processing units (GPUs) are fundamental to the advancement of artificial intelligence, serving as the backbone of AI innovation. The availability of these critical components is currently hampered by supply shortages, contributing to a significant increase in costs.
  2. Energy costs: Training complex AI models requires significant computational power, which in turn consumes a considerable amount of energy. The energy costs associated with running the underlying infrastructure to power this can be substantial.
  3. Multi-agent costs: Multi-agent generative AI frameworks are pivotal for advancing generative AI. They utilize underlying large language models (LLMs) more extensively, which results in increased computational demand and costs compared to using a single LLM such as GPT 4.
  4. Data acquisition and management: If you are fine tuning a generative foundation model, or creating your own models, high-quality data is crucial for training AI models and helping address things like “drift.” Acquiring and curating large datasets can be expensive, as can the ongoing costs associated with data storage, processing and management. The old adage, “junk-in-junk-out,” is a key consideration here.
  5. Personnel costs: Skilled personnel such as data scientists, machine learning engineers and AI researchers are essential for developing, integrating and maintaining AI systems right now. These professionals often command high salaries, which can be a significant ongoing expense. Experience is critical because training or fine tuning models can be extremely expensive, and mistakes requiring a re-do can add up quickly.
  6. Ethical and regulatory costs: Compliance with ethical guidelines and regulatory requirements can add additional costs to AI projects. This may include ensuring data privacy, addressing bias and fairness concerns, and complying with industry-specific regulations. The rules around this are still being laid out — only recently did the U.S. government provide guidance on AI safeguarding, and implementing these safeguards is going to cost a good deal of money, just like zero-trust and other initiatives.

The true cost of AI encompasses a range of factors beyond just the initial investment in hardware and software. GPU shortages and intense computation requirements can further inflate these costs, emphasizing the importance of careful planning and resource management in AI projects. However, if done correctly, the benefits that AI will bring will far outweigh the costs in the long run.

The current landscape of AI mirrors the early, tumultuous days of the Wild West — a period of exploration and untapped potential. As the U.S. government navigates this new frontier, the costs associated with AI projects are expected to be historically high, reminiscent of the early days of the internet or the space race. During these periods, initial investments were substantial as technologies were in their infancy, standards were non-existent, and the path forward was unclear. As policies, standards and best practices for AI are developed and refined, costs will likely normalize, but this initial phase of high expenditure is an essential step toward harnessing the transformative potential of AI for public good.

John Mark Suhy is chief technology officer of Greystones Group.

The post The true cost of AI: 6 factors government agencies should consider first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/commentary/2024/06/the-true-cost-of-ai-6-factors-government-agencies-should-consider/feed/ 0
DHS AI Corps hires an initial 10 experts https://federalnewsnetwork.com/artificial-intelligence/2024/06/dhs-ai-corps-hires-an-initial-10-experts/ https://federalnewsnetwork.com/artificial-intelligence/2024/06/dhs-ai-corps-hires-an-initial-10-experts/#respond Tue, 25 Jun 2024 16:37:39 +0000 https://federalnewsnetwork.com/?p=5052625 Homeland Security Secretary Alejandro Mayorkas says DHS has received more than 6,000 “expressions of interest” in joining the AI Corps.

The post DHS AI Corps hires an initial 10 experts first appeared on Federal News Network.

]]>
The Department of Homeland Security has hired an initial cohort of 10 artificial intelligence experts to join its new AI Corps.

DHS announced the 10 hires this week after first unveiling plans to set up the AI Corps this past February. Homeland Security Secretary Alejandro Mayorkas said the corps’ experts will help DHS pursue its ambitious AI agenda.

“We are leaning forward in our use of AI to advance our mission,” Mayorkas said in a recent interview. “We have a number of pilots. We are leading the federal government in harnessing AI to advance the work.”

Mayorkas said DHS has seen more than 6,000 “expressions of interest” in joining the corps.
“It’s been tremendous,” he said. “We see a greater thirst for public service in the tech sector.”

The 10 experts and some of their most recent experience are below:

  • Sadaf Asrar, former AI technology expert for the National Center for Education Statistics
  • Zach Fasnacht, former senior manager of product management at PricewaterhouseCoopers; former digital projects coordinator at the Library of Congress
  • Pramod Gadde, former machine learning lead and founder of several healthcare-related startups, including Confidante
  • Sean Harvey, former lead on YouTube’s Trust and Safety team
  • Jenny Kim, former principal product manager at McKinsey & Company; former DHS Digital Service and the Defense Digital Service
  • Babatunde Oguntade, former senior principal data scientist at CACI International
  • Christine Palmer, former chief technology officer of the U.S. Naval Observatory
  • Stephen Quirolgico, former computer scientist at the National Institute of Standards and Technology
  • Raquel Romano, former senior director of engineering at Fora; former engineering lead at U.S. Digital Service
  • Robin Rosenberger, former director of interagency IT, data, and analytics initiatives in the Defense Department’s Chief Digital and Artificial Intelligence Office

DHS plans to hire a total of 50 experts to its AI Corps this year. The department plans to employ a model similar to the U.S. Digital Service, where experts are “farmed out” across the department to help advance specific projects.

DHS Chief Information Officer and Chief AI Officer Eric Hysen has said the department will take an “aggressive approach” to recruiting AI experts.

“The new talent joining DHS will help empower our workforce to quickly leverage AI technology in their efforts to safeguard our nation,” Hysen said in a statement today. “The range of professional and academic experiences these new hires bring to the federal government, some for the first time, will go a long way in our efforts to modernize our services. The AI Corps will help transform the way people interact with the government.”

Boyce leading DHS AI Corps

DHS also recently announced former Office of Management and Budget official Michael Boyce to lead the AI Corps. In the release today, DHS said Boyce helped write the section on federal use of generative AI in President Joe Biden’s October 2023 AI executive order.

Boyce also previously served as chief of innovation and design in U.S. Citizenship and Immigration Service’s Refugee, Asylum and International Operations Directorate.

“I’m honored to join and lead this team alongside such talented individuals; the first of several additions to what will become largest and most dynamic civilian AI team in the federal government,” Boyce said in a statement. “AI is the most important technology of our time and it is going to change how we do our critical work to serve the American people. We have a big responsibility to develop and use AI in ways that take advantage of its potential while protecting privacy, civil rights, and civil liberties.”

DHS has named several discrete areas where officials think AI and machine learning could be applied, including in countering fentanyl networks, combating child sexual exploitation and abuse, and delivering immigration services.

The department’s AI roadmap lays out several pilot projects planned for 2024. USCIS, for instance, plans to use large language models to help train refugee, asylum and international operations officers. The technology will help train them on “how to conduct interviews with applicants for lawful immigration,” according to the roadmap.

The post DHS AI Corps hires an initial 10 experts first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/artificial-intelligence/2024/06/dhs-ai-corps-hires-an-initial-10-experts/feed/ 0
DoD study sees ‘big breakthrough’ with using AI for declassification https://federalnewsnetwork.com/artificial-intelligence/2024/06/dod-study-sees-big-breakthrough-with-using-ai-for-declassification/ https://federalnewsnetwork.com/artificial-intelligence/2024/06/dod-study-sees-big-breakthrough-with-using-ai-for-declassification/#respond Mon, 24 Jun 2024 22:49:47 +0000 https://federalnewsnetwork.com/?p=5051780 The DoD study comes as Congress presses the Biden administration for progress on efforts to streamline classification and declassification.

The post DoD study sees ‘big breakthrough’ with using AI for declassification first appeared on Federal News Network.

]]>
A Defense Department research project has seen success in using artificial intelligence and machine learning to manage and declassify records. The project leads say the approach could be used to help agencies manage an explosion in digital records.

The research study, “Modernizing Declassification with Digital Transformation” is sponsored by the Office of the Under Secretary of Defense for Intelligence and Security. It’s being carried out by the University of Maryland’s Applied Research Laboratory for Intelligence and Security (ARLIS), one of DoD’s University Affiliated Research Centers.

J.D. Smith, chief of the records and declassification division at DoD’s Washington Headquarters Services, said the research project validated a proof of concept that shows AI and machine learning models can use “contextual understanding” to perform records management and declassification functions.

“The big breakthrough here is the mapping of business rules to contextual understanding models,” Smith said during a June 24 Public Interest Declassification Board meeting.

Previously, machine learning models “weren’t quite there” to understand the context for different types of content, Smith said. He said it’s key for models to understand the distinctions between, for example, a Department of Agriculture document that describes a “kiloton of grain,” versus a DoD document that uses “kiloton” to describe the specific content of nuclear weapons.

“How do you break through that contextual decision making to a computer and train a computer or an algorithm on doing that,” Smith said. “And one of the big break breakthroughs that we discovered is you can actually do that now, with the algorithms that exist with natural language processing, named entity recognition, and other models, you can configure them to train on how to make a contextual decision making.”

Lawmakers want updates on declassification

The DoD project comes as Congress presses the Biden administration for progress on implementing the Sensible Classification Act of 2023. The legislation was signed into law as part of last year’s defense authorization bill.

In a June 18 letter to federal Chief Information Officer Clare Martorana, a bipartisan group of senators requested an update on efforts to develop a technology solution to support both classification and declassification.

“This opportunity to adapt our classification and declassification processes will greatly enhance the government’s ability to maintain accountability of our classified documents and records, streamline critical processes important to our national security, and work to reestablish trust and transparency between the United States government and the American people,” the lawmakers wrote.

Lawmakers are seeking answers to long-standing concerns about what one former official called a “tsunami of digitally created classified records.” The Biden administration has also kicked off a National Security Council-led process to reform the classification system.

‘Playbook’ for information review

Meanwhile, DoD’s declassification study will eventually result in a “playbook,” Smith said, for using technologies to support declassification and record management decisions in government. ARLIS is working on a “system architecture,” Smith said, as well as costs and other considerations.

The playbook will also turn into a request for proposals, he added, to help guide industry’s work with agencies on the supporting technologies.

DoD is looking to partner with agencies, including the Energy Department and the National Geospatial-Intelligence Agency, to further advance the project. Smith also said the DoD is looking to augment a State Department project that has used AI to declassify diplomatic cables.

DoD also plans to convene an interagency meeting this summer to discuss cross-government efforts and standardization.

“The principles that we’re going to explore here and show how we unlock technology to kind of navigate this, it’s applicable to any type of information review and release you’re doing,” Smith said. “Foreign disclosure, FOIA, security review . . . any type of information security review that you’re doing to clear anything, it follows these steps. And how do we map technology to each step to really make things efficient from a reviewer standpoint?”

The post DoD study sees ‘big breakthrough’ with using AI for declassification first appeared on Federal News Network.

]]>
https://federalnewsnetwork.com/artificial-intelligence/2024/06/dod-study-sees-big-breakthrough-with-using-ai-for-declassification/feed/ 0