GovCon Bid and Proposal Insights
Automated AI-Enabled Help Desk Assessment Event - Department of the Army - Army Contracting Command
The Army Contracting Command is seeking solutions for an Automated AI-Enabled Help Desk Assessment Event under a MA-IDIQ.
Key Details
• Type: Request for Solutions
• Awards: One or more
• Deadline: Sep 26, 2025, 11:59 PM EDT
For insights on opportunities like this, listen to our latest podcast!
Contact ProposalHelper at sales@proposalhelper.com to find similar opportunities and help you build a realistic and winning pipeline.
Imagine you're a cyber defender, maybe deep in a high stakes training scenario, and bam, your system just crashes. You need help, like right now.
Speaker 2:Yeah, time is critical.
Speaker 1:Exactly. But instead of getting that instant fix, you're looking at a slow manual help desk. You're stuck in a queue.
Speaker 2:That's got to be incredibly frustrating.
Speaker 1:Right, especially when you think about the stakes here: national security, protecting critical cyber infrastructure. It's not just annoying, it's potentially mission critical. And that very frustration is exactly what the US Army and the Department of Defense, the DoD, are trying to fix. Today we're diving deep into their really ambitious plan to integrate advanced AI into the persistent cyber training environment. They call it PCTE. And look, this isn't just about speeding up help desk tickets. It seems like it's about fundamentally changing how the military's cyberspace workforce actually trains and operates. It's kind of ushering in a new era.
Speaker 2:It really is a significant shift.
Speaker 1:So, for this deep dive, our information comes straight from the source, an official Department of the Army, special notice. It's basically their public call for innovative ideas.
Speaker 2:Right For cyber innovation challenge number five.
Speaker 1:Exactly, and this document it's like a blueprint. It lays out their vision, their requirements for this AI powered help desk. It's pretty detailed.
Speaker 2:It gives you a clear picture of what they're after.
Speaker 1:So our mission today is to really unpack this thing. We want to explore how the DoD plans to use these, you know, cutting edge AI technologies. What problems are they actually trying to solve? And, maybe most interesting, what are the unique challenges of doing AI in this kind of super secure, locked down environment?
Speaker 2:Yeah, those constraints are key.
Speaker 1:By the end, you'll hopefully get a much clearer picture of what military tech innovation looks like right now, especially with AI.
Speaker 2:Okay.
Speaker 1:All right. So we've set the stage, the need is there, but before we get into the AI specifics, we need to understand the world it's going into. Let's dig into this persistent cyber training environment, the PCTE. What actually is it?
Speaker 2:Well, what's really fascinating here is just the sheer scale and purpose of PCTE. It's designed to give the DoD cyberspace workforce, and importantly allied partners too, a secure, reconfigurable, real-time virtual environment. So the idea is they can train as they fight, that's the phrase they use, and they can do this across all classification levels, supporting the big priorities for US Cyber Command, USCYBERCOM.
Speaker 1:So it's like a hyper-realistic, super-secure digital playground, almost for cyber warriors.
Speaker 2:That's a good way to put it. Precisely, and it's a distributed capability, right? It helps standardize, simplify and automate the whole training lifecycle for these cyber mission force operators. The architecture itself is pretty interesting. There's a control plane, the CP, that handles the core stuff users see: the training portal, the current help desk ticketing system.
Speaker 1:Right the user-facing part.
Speaker 2:Exactly, and then you have one or more event planes, or EPs. That's where the dynamic ranges, the actual cyber training environments, get hosted. Now here's a key detail: the EPs are specifically unaccredited.
Speaker 1:Unaccredited. What does that mean in this context?
Speaker 2:It means they have total flexibility. They can put vulnerable systems in there, even actual malicious software for realistic training.
Speaker 1:Wow Okay.
Speaker 2:But and this is crucial it's logically isolated. No outside internet access at all. That's to make absolutely sure none of those malicious bits can ever escape. It's a completely contained world.
Speaker 1:Got it. Super flexible for training, but also super locked down. That's the balance they have to strike. So okay, if the system is that advanced, that secure, why the big push for an AI help desk? What's the specific pain point they're calling out?
Speaker 2:Yeah, that's a really good question. The document, the special notice, is very clear on this: PCTE's current help desk system is manually intensive, limited by the number of people they have and their specific skill sets. And how bad is the volume? Get this: more than 54,000 support tickets have been submitted since the program started. They're using Atlassian Jira Service Management right now.
Speaker 1:Fifty-four thousand. Wow, okay, that number really tells a story. I can just picture the backlog.
Speaker 2:Exactly. That kind of volume, handled manually, means things take longer to fix, users get frustrated, training gets held up. Plus, imagine being a new user trying to find the right troubleshooting guide or documentation. It's finding the needle in a haystack. They have tools like Confluence for knowledge management, Mattermost for chat, but those aren't really built for automatically answering questions. So the need is clear: they have to evolve, they need to bring in these emerging technologies to help with tasks that humans are doing now, but with less direct intervention.
Speaker 1:Streamline things, cut down wait times, boost productivity.
Speaker 2:Exactly Free up the human experts for the really complex stuff.
Speaker 1:Okay. So that volume, that 54,000 ticket number, makes the why crystal clear. They need a smarter approach. This is where the AI and machine learning case really comes into play. Looking at the document, what's the core of their vision? How do they see AI changing the help desk game?
Speaker 2:Well, the main goal is to seriously upgrade end user support. They want an intelligent help desk support chatbot, plus machine learning analytics working behind the scenes.
Speaker 1:Analytics for what specifically?
Speaker 2:To analyze trends in the help tickets, figure out priorities automatically. It's not just about answering questions, it's about understanding the bigger picture of what's going wrong. And the overall objective? Resolve issues at the lowest possible tier using the least amount of human staff time.
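The trend analytics Speaker 2 describes boil down to counting and ranking issue categories. A minimal sketch in Python, where the ticket fields and category names are invented purely for illustration (the notice doesn't specify a schema):

```python
from collections import Counter

def ticket_trends(tickets, top_n=3):
    """Rank issue categories by ticket volume so priorities can follow the trend."""
    counts = Counter(t["category"] for t in tickets)
    return counts.most_common(top_n)

# Hypothetical sample tickets; field names are assumptions for this sketch.
sample = [{"category": "auth"}, {"category": "network"},
          {"category": "auth"}, {"category": "range"}]
```

A real system would feed Jira export data into something like this and surface the spikes on the monitoring dashboard.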
Speaker 1:Right, keep the simple stuff simple and fast. You mentioned tiers. That sounds important. Can you break those down for us? How does that work?
Speaker 2:Sure, it's a pretty standard but important multi-tiered support model. First up is tier zero, that's self-service: users solving common problems themselves using a knowledge base, FAQs, the chatbot, guided tools. The AI is meant to make this really effective.
Speaker 1:So the AI tries to handle it first.
Speaker 2:Ideally yes. Then you've got tier one, basic service. Think password resets, simple onboarding questions, basic troubleshooting steps. Still pretty routine. Tier two is intermediate service. This is where it gets a bit more technical: network issues, maybe some complex software problems, things that need more expertise than tier one. Then tier three, advanced service. Now you're talking specialist support, maybe even involving development teams, deep debugging for really tricky or unique issues.
Speaker 1:The heavy hitters.
Speaker 2:Right and finally, tier four is external service. That's when you have to go outside, maybe to a vendor, or escalate for issues that can't be solved internally or are specific to a third-party product.
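The five tiers map naturally onto a routing heuristic. A minimal Python sketch, where the keyword rules are invented for illustration (a production system would use a trained classifier and the notice's actual tier definitions):

```python
from enum import IntEnum

class Tier(IntEnum):
    SELF_SERVICE = 0  # knowledge base, FAQs, chatbot
    BASIC = 1         # password resets, onboarding
    INTERMEDIATE = 2  # network, complex software issues
    ADVANCED = 3      # specialist support, deep debugging
    EXTERNAL = 4      # vendors, third-party escalation

# Hypothetical keyword rules, purely for illustration.
ROUTING_RULES = [
    (("password", "reset", "onboarding"), Tier.BASIC),
    (("network", "vpn", "latency"), Tier.INTERMEDIATE),
    (("crash", "stack trace", "debug"), Tier.ADVANCED),
    (("vendor", "license", "third-party"), Tier.EXTERNAL),
]

def route_ticket(text: str) -> Tier:
    """Send a ticket to the lowest matching tier; default to self-service."""
    lowered = text.lower()
    matched = [tier for words, tier in ROUTING_RULES
               if any(w in lowered for w in words)]
    return min(matched) if matched else Tier.SELF_SERVICE
```

Picking the lowest matching tier mirrors the stated objective: resolve issues at the lowest possible tier first.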
Speaker 1:Makes sense. So the AI fits in primarily at those lower tiers.
Speaker 2:Initially, yes, but here's the innovative part: the AI help desk is explicitly expected to learn over time. As it sees more issues and solutions, it should be able to handle more complex requests, effectively pushing more resolutions down to Tier 0 and Tier 1.
Speaker 1:Ah, so it gets smarter and more capable.
Speaker 2:Exactly, but and this is also crucial it also needs to be able to forget.
Speaker 1:Forget. Why would it need to forget?
Speaker 2:Think about it. Cyber tools and procedures change fast. Troubleshooting steps for an old, deprecated system. That's not just unhelpful, it could be actively harmful if the AI suggests it. So it needs to prune obsolete information to stay accurate and relevant.
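That prune-to-stay-accurate behavior can be illustrated with a tiny sketch: drop knowledge entries that are flagged deprecated or haven't been re-validated within a retention window. The field names and the one-year window are assumptions for the sketch, not from the notice:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class KnowledgeEntry:
    title: str
    last_validated: datetime
    deprecated: bool = False  # e.g. the tool it documents was retired

def prune(entries, max_age_days=365, now=None):
    """Keep only current entries: not deprecated and recently re-validated."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [e for e in entries
            if not e.deprecated and e.last_validated >= cutoff]
```

The point is simply that "forgetting" is a deliberate, auditable operation, not model decay.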
Speaker 1:That's a really interesting point, a learning and forgetting system, vital in cybersecurity.
Speaker 2:Absolutely essential for maintaining trust and effectiveness.
Speaker 1:So to build this adaptable learning forgetting system? What specific AI technologies are they actually calling for in the notice?
Speaker 2:They're looking for mature, established AI tech. They specifically mentioned natural language processing NLP. That's key for the AI to understand user requests written in normal conversational language.
Speaker 1:Not just keywords, but actual understanding.
Speaker 2:Right. They also list retrieval-augmented generation, or RAG. That helps the AI pull accurate, relevant answers directly from their own knowledge bases, Confluence documentation, et cetera, instead of just making things up.
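To make the RAG idea concrete, here is a toy version: retrieve the best-matching document by term overlap (a stand-in for real embedding search) and answer only from what was retrieved, escalating when nothing matches. The documents and scoring are invented for this sketch:

```python
import re
from collections import Counter

# Toy knowledge base; real content would be ingested from Confluence and docs.
DOCS = {
    "reset-password": "To reset your portal password, use the self-service reset link.",
    "range-access": "Training ranges are reachable only from inside the closed network.",
}

def tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, k=1):
    """Rank documents by shared terms with the query; drop zero-score matches."""
    q = tokens(query)
    scored = [(doc_id, sum((q & tokens(body)).values()))
              for doc_id, body in DOCS.items()]
    scored = [(d, s) for d, s in scored if s > 0]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

def answer(query):
    """Ground the reply in retrieved text; escalate instead of hallucinating."""
    hits = retrieve(query)
    if not hits:
        return "No relevant documentation found; escalating to a human."
    return DOCS[hits[0]]
```

The refuse-and-escalate branch is the part that matters for the accuracy controls the notice demands.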
Speaker 1:Reducing the risk of hallucinations, presumably.
Speaker 2:Precisely, and they mentioned agentic AI. This is really interesting. It suggests they want an AI that can not just answer questions but potentially take actions, diagnose problems, maybe even execute simple fixes autonomously, like a real digital assistant.
Speaker 1:Okay, that's stepping it up.
Speaker 2:Definitely and, of course, general machine learning for that continuous improvement loop and knowledge-base optimization, making sure the data the AI learns from is actually good, clean and up-to-date. The vision includes an AI assistant for users, yes, but also one for the human help desk staff, helping them find answers faster, plus AI/ML for automatically tagging tickets, routing them correctly and even spotting gaps in the knowledge base itself.
Speaker 1:So it's helping on multiple fronts user self-service, staff assistance and system management.
Speaker 2:It's a comprehensive approach.
Speaker 1:Okay, that's the vision. Now let's get into the nuts and bolts for the companies hoping to build this. What are the absolute must-have requirements and what are those technical hurdles, especially in this unique environment?
Speaker 2:Well, the requirements are pretty demanding. They emphasize using existing mature AI/ML solutions, but tailoring them specifically for PCTE. No science projects here. They obviously need that self-service AI chatbot or virtual assistant we talked about. And, critically, the NLP has to be sophisticated. It needs to handle conversational prompts, ask clarifying questions if needed and tailor the response to the user.
Speaker 1:Personalized help.
Speaker 2:Yes, and the escalation logic needs to be precise, knowing exactly when to pass a ticket up the chain to the right human team. They also stress accuracy and completeness. There need to be strong controls to prevent hallucinations, the AI making stuff up or giving irrelevant answers. That's non-negotiable.
Speaker 1:You absolutely cannot have an AI giving faulty technical advice in a military cyber training environment.
Speaker 2:Absolutely not. The system also needs to enrich tickets automatically with relevant tags, provide that AI-powered self-service portal and have a solid management and monitoring dashboard so the humans can see what the AI is doing. And they really hammer on the need for continuous machine learning: model retraining using new tickets, historical data, knowledge base updates, user feedback, the whole loop.
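The automatic tag enrichment can be sketched with a keyword-to-tag map. The tags and keywords here are invented for illustration; the notice expects ML to do this, so think of the dictionary as a stand-in for a learned classifier:

```python
# Hypothetical tag vocabulary, purely illustrative.
TAG_KEYWORDS = {
    "auth": ("login", "password", "mfa"),
    "network": ("vpn", "latency", "dns"),
    "range": ("event plane", "scenario", "exercise"),
}

def enrich(ticket_text: str) -> list[str]:
    """Attach every tag whose keywords appear anywhere in the ticket body."""
    lowered = ticket_text.lower()
    return sorted(tag for tag, words in TAG_KEYWORDS.items()
                  if any(w in lowered for w in words))
```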
Speaker 1:So it's constantly learning from everything.
Speaker 2:Right. And, underlying all of this, stringent data security within a controlled unclassified information (CUI) compliant environment. Handling sensitive data requires extreme care.
Speaker 1:Okay, Preventing hallucinations, ensuring security those make perfect sense. But you mentioned constraints earlier that sounded particularly tough from an infrastructure view. What were those again?
Speaker 2:Yes, this is where it gets really challenging, especially for typical AI companies. First, the solution must operate entirely within a closed, restricted network.
Speaker 1:Meaning absolutely no connection to the commercial Internet or cloud services.
Speaker 2:None whatsoever, completely air-gapped. This is huge for AI, as many models rely on cloud resources for training or inference. It also has to meet really tough cybersecurity standards: NIST guidelines, ISO 27001, FedRAMP compliance levels. These are serious benchmarks.
Speaker 1:Top tier security.
Speaker 2:And maybe the biggest technical surprise: the document states there are no dedicated AI/ML GPUs currently available in the PCTE infrastructure.
Speaker 1:Wait, no GPUs? For a state-of-the-art AI/ML project? How are they expecting this to work?
Speaker 2:Well, they do add the caveat that the program can adjust infrastructure as needed, but the starting point is assume no specialized AI hardware. This puts immense pressure on vendors to design highly efficient models, models that can run effectively on standard CPUs or perhaps very limited hardware resources. It forces real innovation in model optimization and edge computing.
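One standard technique for squeezing models onto CPU-only hardware is weight quantization: storing weights as 8-bit integers instead of 32-bit floats. A minimal symmetric int8 sketch in pure Python, just to show the idea (real deployments would use an optimized runtime, not this loop):

```python
def quantize_int8(weights):
    """Map float weights to int8 [-127, 127] with one scale factor (4x smaller)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero weights
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Approximate the original floats at inference time."""
    return [q * scale for q in quantized]
```

The small precision loss here is exactly the trade-off efficient-model design has to manage when there are no GPUs to fall back on.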
Speaker 1:That is a major constraint. Wow, it completely changes the design approach compared to typical cloud-based AI development. You can't just throw more GPUs at the problem.
Speaker 2:Exactly. You have to be much smarter about the algorithms themselves. Oh, and they also need to integrate smoothly with their existing tools, like Jira for ticketing, and be able to ingest data from various existing sources without causing disruption.
Speaker 1:So a super smart, adaptable AI that learns and forgets, runs securely in an air gap network without GPUs and plays nice with existing software, that's quite a challenge.
Speaker 2:It really is. It demands cutting edge AI expertise combined with deep understanding of secure, resource constrained environments.
Speaker 1:OK, so given these high stakes and tough requirements, how is the Army actually planning to select a solution? What's the process and timeline for companies wanting to tackle this?
Speaker 2:It's a multi-phase competitive process. They're using what's called other transaction agreements, or OTAs, which are often used for rapid prototyping and innovation in defense. The first step was submitting white papers. That window opened back on August 18th 2025, and it closes pretty soon: September 26th 2025, at 11:59 p.m. Eastern time.
Speaker 1:Okay, so that deadline is coming up fast.
Speaker 2:Yes, and it's important to note, submissions are restricted to US citizens only and they have to go through a specific portal called Vulcan.
Speaker 1:Got it, so companies are likely scrambling to get those white papers in right now. What happens after that September 26th deadline?
Speaker 2:After the window closes, an assessment team reviews the white papers. That's scheduled from September 29th to October 14th 2025. Based on those reviews, they'll down select a group of companies. Those chosen will be invited to give virtual solution demonstrations.
Speaker 1:Ah, show and tell time.
Speaker 2:Pretty much. Those demos are planned for November 3rd through 6th 2025. After seeing the demos, the Army stakeholders will decide whether to award one or potentially multiple prototype projects. They want to move relatively quickly to the prototyping phase.
Speaker 1:And during those reviews and demos, what are the key things they're evaluating? What makes a proposal likely to get picked?
Speaker 2:They've laid out clear criteria, things like the overall quality of the submission, obviously Then operational relevancy. Does this solution actually solve the Army's real problem? In the PCTE context, that's huge.
Speaker 1:Does it fit the mission?
Speaker 2:Exactly. Also the technical approach. Is it sound? Innovative but feasible? How will they handle development and integration? What's the plan for operations and maintenance? And, of course, schedule and price are factors too. Each area gets scored on a zero-to-five scale.
Speaker 1:And the winners? What do they have to deliver?
Speaker 2:They'll be expected to deliver working prototypes in increments, provide comprehensive documentation, including details on the AI algorithms themselves, give periodic demos to show progress, document all the security controls. They even need to provide cost estimates for software licensing down the road and spell out the data rights terms. It's a full package they're looking for, not just a cool piece of tech.
Speaker 1:Right. They need a sustainable, documented, secure solution, not just a flashy demo.
Speaker 2:That's the goal.
Speaker 1:Well, this has been a really revealing look into a fascinating project. We've gone from the very real frustrations of a slow help desk in a critical training environment.
Speaker 2:That 54,000-ticket problem.
Speaker 1:All the way to the cutting edge of AI, things like agentic systems and models that have to forget, deployed under some really challenging constraints, like that closed network and lack of GPUs.
Speaker 2:Yeah, it really highlights that intersection of military operational needs and advanced tech.
Speaker 1:It definitely does. It shows how they're trying to be innovative to solve a concrete problem.
Speaker 2:And if you step back and look at the bigger picture, this whole initiative really underscores how vital rapid tech integration is becoming for national security. It also makes you think about the unique challenges, but also the opportunities, when you deploy sophisticated AI like these continuously learning and forgetting models inside these highly sensitive isolated systems, especially without the usual cloud crutches or dedicated AI hardware.
Speaker 1:That lack of standard resources forces a different kind of innovation.
Speaker 2:It forces resilience and efficiency in the AI design itself. Maybe it's a blueprint for AI that has to work reliably at the edge in difficult conditions.
Speaker 1:So, thinking about that, what really stands out to you from this whole initiative? As AI gets more powerful, how might the lessons learned here, you know, balancing the innovation push with ironclad security, the human factor, all within these resource limits, how might that shape how we deploy AI in other critical areas?
Speaker 2:That's the big question, isn't it?
Speaker 1:Thinking beyond military training, maybe into critical infrastructure like power grids or finance or healthcare systems. Could this Army project offer clues for building trustworthy AI in those sensitive domains too?
Speaker 2:It certainly could. The focus on accuracy, preventing hallucinations, security in isolated environments, efficient models, these are challenges that many critical sectors face or will face as they adopt more AI. And the need to learn and forget? That's probably relevant anywhere regulations or facts on the ground change quickly.
Speaker 1:So the constraints here might actually be driving solutions that have much broader applicability down the line. Something for us all to watch.
Speaker 2:Definitely something to watch.