
The 526ez had been studied constantly, but always in fragments.
Teams tested features, validated designs, and ran usability sessions. Every quarter brought new findings about specific sections or proposed changes. But no one had ever just watched. Watched a Veteran sit down with their real information, their real medical records, their real trauma, and try to file a claim from start to finish.
Feature testing couldn't answer the most important questions: Where did Veterans lose trust? Where did the burden accumulate? Why did they give up? The product team needed a different perspective, and they needed findings that could shape how work on the 526ez was prioritized.
We designed a study that would let us observe Veterans completing the form with their personal information in real time. This approach, sometimes called shadow research or contextual inquiry, was new to the VA ecosystem. There were no prototypes, no staging environments, no mock data. Just Veterans filing actual claims while a researcher watched.
The sessions ran about two hours: ten minutes for introductions, ninety minutes of observation, and ten minutes for reflection. We scheduled 19 sessions to reach 5 completed observations, and data collection took a month. The no-show rate was 42%, and many of the participants who did attend were blocked by authentication issues or simply weren't ready to file when they arrived.
The methodology required constant flexibility. Many Veterans joined expecting the researcher to help them file, not just observe. We clarified roles at the start of each session, but the expectation gap was real.
We invited Veterans to return for follow-up sessions if they couldn't finish, but no one opted in. These weren't failures; they were findings about what this kind of research asks of participants.
What made the study possible was a trauma-responsive approach. The 526ez asks Veterans to describe the events that led to their conditions, including Military Sexual Trauma (MST). We planned for mid-session breaks, especially during sensitive sections. Moderators had notetakers ready to step in, and debriefs were scheduled afterward so the research team could decompress. The work required holding two things at once: rigorous observation and genuine care for the people in the room.
Veterans showed us what feature testing never could.
The form's navigation didn't let them preview what came next or move easily between sections. "If I had this in paper, I'd read the whole questionnaire," one Veteran said. "Here I just have to have faith in the back button that it's not going to take me back too far." Another told us, "I trust the browser more than I trust the website."
When errors blocked progress, Veterans rarely called the Contact Center. "Nothing… recordings, leave a message, no response," one explained. They'd rather wait and try again later than deal with the VA over the phone.
The process of filing for PTSD was long and confusing, especially for experiences that didn't fit the form's structure. "If I got hit in the head, then it's very straightforward," one participant said. "But for something like sexual harassment, it's not one event. It was the environment." Another, an MST survivor, said quietly: "I'm sorry I don't have a more visible injury."
Two Veterans were blocked entirely because they didn't have medical records ready for upload. They couldn't skip that step and return to it later. Both exited frustrated, facing the task of tracking which conditions they'd entered and matching them to documents they still needed to gather.
After submitting, most Veterans expected a confirmation email. When it didn't arrive, they took screenshots or saved the page as a PDF. They'd learned not to trust that the system would remember.
The findings gave the product team something new: a view of the whole. Not just where individual features failed, but where the experience broke down, where Veterans quietly adapted, and where they gave up.