GSA’s 10x Program

FLEXION | 2019-2020

GSA's 10x program, housed within the Technology Transformation Services (TTS), is an incremental investment program that supports innovative technology ideas from federal employees to improve the public's experience with government services. Using a phased approach, 10x provides funding and support to turn promising ideas into tangible, usable products or services for the public. The program operates like a private sector venture studio, making small bets on risky projects and only continuing funding for those that demonstrate success and deliver value. 

10x Program Website

Case Study Note

I authored 18 research papers designed to be clear and accessible for both technical and non-technical audiences.

While the topics center on government innovation, I encourage you to review them as demonstrations of my UX research and strategy process.

In particular, the Research Approach and Reflections sections highlight how I investigate problems, synthesize insights, and translate findings into actionable recommendations.

How it Works

10x addresses long-standing problems within the federal government by exploring innovative technology solutions.

Idea Generation

GSA's 10x program seeks ideas from federal employees across all levels of government on how to use technology to solve problems or improve services. 

Phased Funding

The program uses a phased approach to invest in projects, allowing for continuous evaluation. 

  • Phase 1 (Investigation): A small team assesses the potential opportunity and any major roadblocks. 

  • Phase 2 (Discovery): The team investigates the idea's potential, considering market fit, finances, and regulatory factors. 

  • Phase 3 (Development): A minimum viable product (MVP) is developed with an active agency customer. 

  • Phase 4 (Scale): Funding is increased to broaden usage and develop a plan for ongoing financing and maintenance. 

Venture Studio Model

10x functions like a venture studio, adopting private-sector practices such as making small initial investments and continuously vetting projects. 

Discipline and Evaluation

Only projects demonstrating impact and delivering value continue to the next funding phase. 

My Role

Across multiple government innovation projects with 10x at GSA, I led UX research and design strategy to evaluate the feasibility of new digital services and policy-driven tools. My work centered on framing ambiguous problems, conducting user-centered investigations, and translating research into actionable recommendations for federal stakeholders.

My role often began with clarifying who the users were—from federal employees and contractors to citizens, students, and retailers—and what problems they were actually facing. I applied a range of methods including stakeholder interviews, user interviews, policy/technical reviews, and comparative analyses of existing tools and processes. I synthesized these findings into clear insights about pain points, gaps in usability, and cultural or organizational barriers that shaped adoption.

As a UX researcher, I was responsible for evaluating not just the technology, but the human context around it—whether that meant uncovering why agencies were reluctant to publish open source code, identifying risks in data-sharing proposals, or highlighting where digitization offered no clear user benefit. I consistently grounded my work in evidence-based recommendations that helped decision makers avoid costly missteps and focus investment where impact would be highest.

From projects such as the Technology Services Catalog, Automating Code Assignments, Government Email Services, and Central App for Internships, I delivered research frameworks that clarified usability challenges, adoption risks, and opportunities for improving trust, discoverability, and efficiency. In some cases, my insights supported moving forward with a discovery phase (e.g., Applicant Tracking Systems, Automating Code Assignments). In others, my research demonstrated why an idea was not feasible or user-driven in its current form (e.g., Surfacing User Research, Open Source Code Awareness).

In all cases, my contribution was to ensure the voice of users and the realities of adoption were front and center in government innovation decisions. By doing so, I helped 10x make strategic go/no-go funding calls that balanced technical feasibility, policy constraints, and human experience.

Scam Reporting Follow Up

🏛 Context & Purpose

The USAGov Sandbox team observed that many people who called the USAGov contact center to report scams stopped short of filing an official complaint with the appropriate agency. I was tasked with investigating whether a follow-up process—or automation—could reduce drop-off, improve scam reporting completion, and ultimately provide agencies with better data to combat scams.

🔍 Research Approach

  • Reviewed prior work by the Sandbox team and FTC scam reporting processes.

  • Listened to live calls at the USAGov contact center to understand the citizen experience.

  • Analyzed consumer protection reports (FTC’s Consumer Sentinel Data, Do Not Call Registry).

  • Explored online communities (Facebook groups, scam awareness forums) to capture real-world frustrations.

  • Interviewed USAGov staff (program managers, accessibility leads, contact center specialists) to identify technical, staffing, and policy barriers.

💡 Findings

  • Drop-off rate unclear: No concrete data exists comparing USAGov call referrals to agency complaints, making the scale of the issue uncertain.

  • Limited citizen incentive: Scam reporting rarely benefits individuals directly, which reduces motivation to follow through.

  • Technical constraints: Automating transfers or call routing across multiple agencies would require costly restructuring of phone systems and contracts.

  • Alternative solutions exist: The USAGov scam-reporting chatbot and FTC’s Consumer Sentinel Network already provide meaningful ways to raise awareness and capture data.

👩🏾‍💻 Impact & Recommendation

I concluded that creating a follow-up or automated reporting process would be resource-intensive with little proven impact. Instead, I recommended:

  • Collecting better data on scam-reporting drop-offs.

  • Continuing to build awareness tools like the chatbot.

  • Strengthening partnerships with FTC and agencies to improve transparency and streamline reporting.

⚠️ Risks

  • Lack of Data: No reliable metrics exist on drop-off rates between USAGov calls and agency reports, making it impossible to confirm the severity of the problem.

  • Technical Barriers: Automating call transfers or reporting across multiple agencies is infeasible due to incompatible phone systems, contracts, and costs.

  • PII & Data Sharing Challenges: Agencies often require sensitive information to process scam reports, creating legal and technical hurdles for sharing across systems.

  • Low Public Value: Individuals rarely see direct benefits from reporting scams (e.g., no refund or immediate relief), reducing motivation to complete the process.

  • Operational Costs: Creating a dedicated scam contact center or enabling warm transfers would require significant investment with uncertain return on impact.

📝 Reflection

This project highlighted the importance of grounding problem statements in evidence before committing resources. As a UX researcher, I helped uncover that the perceived problem (citizen drop-off) wasn’t validated by data—and that more impactful improvements were already underway in parallel initiatives.

USWDS Privacy One-Stop Shop

🏛 Context & Purpose

I investigated whether centralizing privacy templates (Privacy Impact Assessments and Privacy Threshold Analyses, or PIAs/PTAs) and completed documents would improve efficiency for government employees and build public trust in how privacy is handled. The proposal assumed employees struggled to complete PIAs and that separating templates from completed documents contributed to public distrust.

🔍 Research Approach

  • Reviewed published templates and completed documents across agencies (e.g., DHS).

  • Conducted interviews with GSA privacy staff.

  • Assessed public trust concerns and connection to privacy practices.

💡 Findings

  • GSA employees already had access: Centralization was not a pain point internally.

  • Public distrust came from broader issues: Transparency and clarity of rights—not where documents were stored—caused concern.

  • Higher-impact opportunities exist: Efforts like System of Records Notices (SORNs) and the DevOps for Privacy Offices project better addressed trust issues.

👩🏾‍💻 Impact & Recommendation

I recommended not moving forward with this project due to lack of evidence that centralization solved the root problem. Instead, I suggested exploring process improvements to PIAs and increasing public transparency through related initiatives.

⚠️ Risks

  • Weak Premise: The project’s foundation lacked strong evidence—centralizing templates and completed PIAs/PTAs was not a proven problem for government employees.

  • Misaligned Focus: Effort risked shifting away from storage issues toward broader, more complex aspects of the PIA process, creating scope creep.

  • Limited Public Impact: Centralization would not address the root causes of public distrust, which stem from lack of transparency and clarity on rights, not document location.

  • Resource Misallocation: Investing in this project could divert time and funding away from higher-impact privacy initiatives, such as improving SORNs or PII handling practices.

📝 Reflection

This project taught me the importance of validating assumptions before investing in a solution. As a researcher, I helped shift focus from a surface-level fix to more impactful improvements tied to trust and transparency.

Government Notification Services (GNS)

🏛 Context & Purpose

The project explored whether TTS should create a government-wide notification platform to communicate with the public. The goal was to improve transparency, reduce uncertainty, and save resources by sending proactive updates (e.g., application status, appointment reminders).
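
To make this concrete, below is a minimal, hypothetical sketch of what an agency-facing client for such a shared notification service might look like. The GNSClient class, the notify.example.gov endpoint, and the template fields are all invented for illustration; they are not an existing API.

```python
# Hypothetical sketch of an agency sending a proactive status update through a
# shared Government Notification Service. The GNSClient class, endpoint URL,
# and payload fields are illustrative assumptions, not a real API.
import json
from urllib import request


class GNSClient:
    def __init__(self, api_key: str, base_url: str = "https://notify.example.gov/v1"):
        self.api_key = api_key
        self.base_url = base_url

    def send_email(self, to_address: str, template_id: str, personalisation: dict) -> dict:
        """POST a templated notification; the shared service renders and delivers it."""
        payload = json.dumps({
            "email_address": to_address,
            "template_id": template_id,
            "personalisation": personalisation,
        }).encode("utf-8")
        req = request.Request(
            f"{self.base_url}/notifications/email",
            data=payload,
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)


# Example: an appointment reminder sent by a benefits agency.
client = GNSClient(api_key="agency-issued-key")
client.send_email(
    to_address="applicant@example.com",
    template_id="appointment-reminder",
    personalisation={"first_name": "Jordan", "appointment_date": "June 3, 2020"},
)
```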

🔍 Research Approach

  • Reviewed the prior Notifications as a Service (NaaS) project.

  • Studied international examples (Notify UK, Notify Canada, Notify Australia).

  • Conducted 8+ interviews with GSA staff, external agencies, and product teams running notification platforms.

💡 Findings

  • Public trust opportunity: Notifications increase reassurance, reduce anxiety, and improve transparency.

  • International success stories: UK Notify and others proved feasibility, cost recovery, and adoption.

  • Agency friction lowered: A centralized service would reduce procurement overhead and technical barriers.

  • Risks: Success would require strong adoption, a sustainable cost model, and security safeguards.

👩🏾‍💻 Impact & Recommendation

I recommended moving forward with the project. TTS was better positioned to support it than during the earlier NaaS effort, and lessons from international platforms offered a strong foundation. Next steps included validating demand across agencies and developing a cost recovery plan.

⚠️ Risks

  • Financial & Staffing Uncertainty: The earlier NaaS project ended partly because TTS lacked resources; without clear cost recovery and staffing models, GNS could face the same fate.

  • Adoption Risks: Agencies may not adopt a centralized service if they lack demand, face budget constraints, or prefer existing/private-sector solutions.

  • Security Concerns: Sending government notifications from multiple agencies introduces risks of phishing if there is no consistent format, branding, or authentication standard.

  • Sensitive Data Challenges: Some notifications may require personal or sensitive information, potentially making certain agencies ineligible to use a centralized system.

  • Public Confusion: If not carefully implemented, inconsistent rollout or unclear communication could reduce trust instead of building it.

📝 Reflection

This project reinforced the value of studying analogous models to avoid reinventing the wheel. My role clarified that notifications are not just a technical service, but a user experience of trust between government and citizens.

User Centered Data Specifications

🏛 Context & Purpose

The project explored whether the government could standardize the process for developing data specifications to make data collection and sharing more consistent across agencies. The proposal assumed that a repeatable, user-centered process could improve interoperability and public accessibility of government data.
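
As a small, hypothetical illustration of what a data specification actually pins down, the sketch below defines one invented record type as a JSON Schema and validates a sample record with Python's jsonschema library. The field names and rules are assumptions for this example only, not part of any real specification such as NIEM or WZDx.

```python
# Hypothetical field-level data specification for an invented "work zone" record,
# expressed as JSON Schema and checked with the jsonschema library. Field names
# and rules are assumptions for this sketch only.
from jsonschema import validate, ValidationError

WORK_ZONE_SPEC = {
    "type": "object",
    "required": ["road_name", "start_date", "end_date", "lanes_closed"],
    "properties": {
        "road_name": {"type": "string", "minLength": 1},
        "start_date": {"type": "string", "pattern": r"^\d{4}-\d{2}-\d{2}$"},  # ISO 8601 date
        "end_date": {"type": "string", "pattern": r"^\d{4}-\d{2}-\d{2}$"},
        "lanes_closed": {"type": "integer", "minimum": 0},
    },
    "additionalProperties": False,
}

record = {
    "road_name": "I-40 EB",
    "start_date": "2020-05-01",
    "end_date": "2020-05-14",
    "lanes_closed": 1,
}

try:
    validate(instance=record, schema=WORK_ZONE_SPEC)
    print("record conforms to the specification")
except ValidationError as err:
    print(f"record rejected: {err.message}")
```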

🔍 Research Approach

  • Reviewed ongoing federal efforts like the Data Federation Project and NIEM (National Information Exchange Model).

  • Conducted interviews with the Department of Transportation (DOT) Work Zone Data Exchange (WZDx) project team.

  • Analyzed risks and challenges associated with integrating standardized processes across agencies.

💡 Findings

  • Usability gaps: Data specs are often created without input from those who collect or use the data.

  • Existing models: Starting from frameworks like NIEM is more efficient than building from scratch.

  • Diversity of data: The wide variation in government data needs makes a single standardized process unrealistic.

  • Risks: Integration difficulties, costly change management, resource shortages, adoption barriers, and lack of authority compared to accredited standards organizations.

👩🏾‍💻 Impact & Recommendation

I recommended not moving forward with creating a universal process. Instead, agencies should:

  • Leverage existing standards (e.g., NIEM).

  • Extend or tailor models to fit their needs.

  • Ensure new specifications are informed by real user input and contexts.

⚠️ Risks

  • Integration Challenges: Government data is diverse and specialized; one standardized process is unlikely to serve all use cases.

  • Change Management Issues: Specs may require major revisions after release, creating upgrade and backward compatibility challenges.

  • Adoption Barriers: Standards are notoriously difficult to enforce across agencies; even well-established models like FedRAMP face long adoption cycles.

  • Resource Constraints: Agencies may lack the expertise or funding to staff the roles (engineers, UX, data scientists) needed to meet a new standard.

  • Authority & Acceptance: Standards from unrecognized sources risk low credibility compared to those from accredited standards organizations (ASOs).

  • Limited Public Impact: Efficiency of production affects providers more than consumers; public benefit depends more on data quality than on standardization of process.

📝 Reflection

This project demonstrated that sometimes the best design decision is restraint. My research showed that forcing a universal solution would create more problems than it solved, and that focusing on adapting proven models with user input is a more pragmatic path forward.

Automatic Transcription

🏛 Context & Purpose

This project explored whether automatic transcription services could replace professional note-takers for government research, interviews, and meetings. The pitch proposed that leveraging existing transcription software or APIs might reduce the burden on agencies and improve efficiency.

🔍 Research Approach

  • Reached out to potential government users of transcription services.

  • Reviewed pilots and past research, including 18F’s test of GoTranscript.

  • Interviewed agencies such as the Refugee Asylum Office (RAO) within DHS.

  • Conducted a market scan of commercial solutions (Google, Amazon, Microsoft, Nuance, Trint, Otter, Simon Says).

💡 Findings

  • Low demand: Many government teams had already explored transcription tools and were not optimistic about replacing human note-takers.

  • Accuracy gaps: Human transcriptionists averaged roughly a 4% word error rate, while software averaged about 12% under ideal conditions and 50–60% in complex cases such as asylum interviews (this metric is sketched after the list).

  • Persistent challenges: Difficulty differentiating speakers, handling accents, recognizing technical language, and filtering background noise.

  • Accessibility concerns: Hearing-impaired users acknowledged potential support value but noted these tools couldn’t replace interpreters.

  • Security risks: Potential data storage and privacy issues if sensitive conversations were processed on unapproved systems.
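
The accuracy figures above are expressed as word error rate (WER), the standard metric for transcription quality: the number of word substitutions, deletions, and insertions divided by the number of words in the reference transcript. A minimal sketch, with invented example sentences:

```python
# Minimal word error rate (WER) sketch: WER = (substitutions + deletions +
# insertions) / number of words in the reference transcript.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard word-level edit distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


# One wrong word out of eight gives WER = 0.125 (12.5%).
print(word_error_rate(
    "the applicant arrived at the office on monday",
    "the applicant arrived at the office on sunday",
))
```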

👩🏾‍💻 Impact & Recommendation

The team recommended not moving forward. While transcription technology has advanced, services are not mature enough to meet government accuracy, security, and usability needs. Existing workflows with human note-takers remain more reliable.

⚠️ Risks

  • Accuracy Limitations: Current services have high word error rates (WER), especially with multiple speakers, accents, technical language, or background noise.

  • Low Adoption Potential: Agencies and practitioners showed little interest due to poor past experiences and lack of confidence in replacing human note-takers.

  • Dependence on Human Review: Automated outputs still require full human verification, reducing efficiency gains.

  • Data Security & Privacy: Risks around storing sensitive recordings on unapproved systems; potential misuse without explicit consent.

  • Accessibility Concerns: While useful as a support tool, transcription services are not reliable enough to replace human interpreters for accessibility needs.

📝 Reflection

Automatic transcription has promise as AI evolves, but timing is critical. Future evaluations should revisit when error rates, multi-speaker support, and data protections improve. For now, the value does not outweigh the risks or limitations.

Data Fixer

🏛 Context & Purpose

This project explored whether machine learning could extend or complement the existing ReVal tool (developed by the U.S. Data Federation) to not only validate data but also propose fixes for formatting errors (e.g., zip codes, Social Security numbers). The intent was to reduce the heavy time burden agencies face cleaning data, improve efficiency, and provide cleaner datasets for analysis and decision-making.

🔍 Research Approach

  • Reviewed the role of ReVal, which surfaces data errors but does not resolve them.

  • Conducted interviews with practitioners across federal agencies, including DOT, SBA, Census Bureau, and GSA.

  • Analyzed how agencies currently handle large-scale data cleaning, including use of tools like OpenRefine and expensive third-party services.

  • Researched machine learning approaches to data cleaning and evaluated potential technical requirements.

💡 Findings

  • High demand across agencies: Practitioners spend significant time manually correcting errors or returning files to data owners. Census Bureau and DOT both reported large datasets where error correction is time-consuming and delays workflows.

  • ReVal gap: While effective at flagging errors, ReVal users need a system that also recommends or executes fixes.

  • Common needs: Clean geocoding data, consistent ingestion across multiple file formats (CSV, XML, JSON), and scalable solutions as data volumes grow.

  • Current reliance on manual or third-party tools: Agencies like SBA have had to dedicate extensive time (e.g., 1.5M rows of lending data) to manual cleaning or learning complex tools.

  • Market validation: Gartner estimates 80% of data scientists’ time is spent cleaning and preparing data; machine learning could significantly reduce this burden.

👩🏾‍💻 Impact & Recommendation

The team recommended moving forward to Phase 2. A Data Fixer tool has strong potential to:

  • Automate fixes for common formatting errors.

  • Propose resolutions for more complex anomalies.

  • Save time, reduce costs, and increase the reliability of government datasets.

Key opportunities include building a prototype leveraging machine learning, expanding ingestion formats, and prioritizing high-value use cases like geocoding.
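
As a rough illustration of the rule-based first pass such a prototype might take before layering in machine learning, the sketch below proposes fixes for one common formatting error (malformed ZIP codes). The function, heuristics, and sample values are invented for this example and are not part of ReVal.

```python
# Hypothetical first-pass "fixer" for one common formatting error (ZIP codes).
# A production Data Fixer would layer ML-based suggestions on top of rules like
# these; everything here is invented for illustration.
import re

ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def propose_zip_fix(raw: str) -> tuple[str, bool]:
    """Return (proposed_value, needs_human_review) for one ZIP code field."""
    value = str(raw).strip()
    if ZIP_RE.match(value):
        return value, False                          # already valid
    digits = re.sub(r"\D", "", value)
    if len(digits) == 4:
        return digits.zfill(5), True                 # likely a dropped leading zero
    if len(digits) == 9:
        return f"{digits[:5]}-{digits[5:]}", False   # ZIP+4 missing its hyphen
    return value, True                               # no confident fix; flag for review

for example in ["20405", "2045", "204051234", "not a zip"]:
    print(example, "->", propose_zip_fix(example))
```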

⚠️ Risks

  • Training Risks: The model could be retrained incorrectly if not carefully managed, leading to inaccurate or inconsistent fixes.

  • Data Set Quality: Success depends on access to clean, high-quality datasets from agencies to train and validate algorithms.

  • Complexity of Errors: Some errors may require human verification, limiting full automation and efficiency gains.

  • Format Limitations: ReVal currently supports only JSON; expanding to other formats (CSV, XML, etc.) introduces technical challenges.

  • Adoption Barriers: Agencies may hesitate to rely on automated fixes without trust in accuracy and governance.

📝 Reflection

Clean data is foundational for analytics and decision-making. Data Fixer represents a chance to scale efficiency across government by addressing one of the most persistent pain points for data practitioners. By evolving ReVal or developing a new tool, agencies could dramatically reduce wasted effort on manual cleanup while improving accuracy and trust in shared data.

Technology Services Catalog

🏛 Context & Purpose

This project investigated whether GSA should develop a centralized, searchable catalog of GSA and TTS technology services to make it easier for staff and agency customers to discover and use available services. The catalog was envisioned with taxonomy, categorization, advanced search and filtering, and delegated ownership for content updates.

The goal was to address ongoing challenges where multiple service catalogs existed across GSA, often inconsistent, outdated, or unknown to potential users—making discovery and adoption difficult.

🔍 Research Approach

  • Reviewed past and current attempts at building service catalogs within GSA (GEAR, GSA Advantage, TTS portfolio GitHub, TTS Handbook, Software & Systems Inventory).

  • Benchmarked against Digital.gov’s Directory of Services, Tools and Teams, which was considered a stronger model with structured criteria and early positive adoption.

  • Conducted 12 interviews with GSA leaders, TTS staff, Presidential Innovation Fellows, and agency partners to understand expectations and challenges.

💡 Findings

  • Need confirmed: IT practitioners agreed a centralized catalog would be useful for discovering services and points of contact (POCs).

  • Persistent barriers: Existing catalogs suffer from fragmented ownership, lack of governance, and reliance on voluntary updates. Accuracy and trust are undermined without clear accountability.

  • Governance gap: A successful catalog requires a dedicated full-time Product Owner and a governance structure to enforce updates and ensure consistent participation. Without this, catalogs risk becoming abandoned or inaccurate.

  • Adoption challenge: Past catalogs failed due to unclear value propositions, lack of dedicated resources, and competing IT priorities.

  • External examples: NASA and others attempted similar catalogs but faced the same issues of inconsistent upkeep and limited utility.

👩🏾‍💻 Impact & Recommendation

The team concluded that while a Technology Services Catalog is valuable in concept, the project should not move forward at this time. Key risks include:

  1. Ownership risk – without a dedicated Product Owner and governance, the catalog will quickly become outdated.

  2. Data accuracy risk – voluntary updates lead to unreliable information.

  3. Adoption risk – without strong early adoption and stakeholder engagement, the catalog won’t be trusted or used.

⚠️ Risks

  • Lack of Product Ownership: Without a full-time product owner and governance board, the catalog risks becoming outdated and unused.

  • Data Inaccuracy: Reliance on service teams to self-maintain listings could result in incomplete, inconsistent, or inaccurate information.

  • Adoption Challenges: If programs and customers don’t contribute or use the catalog, its value as a trusted resource will be undermined.

📝 Reflection

Centralizing service discovery is important for efficiency and awareness, but success depends on governance and ownership rather than technology alone. The project highlighted the organizational and cultural hurdles—not technical feasibility—as the real blockers. Future efforts should prioritize leadership commitment, governance structures, and ongoing resourcing before investing in new catalog development.

Finding Form-ester

🏛 Context & Purpose

This project explored how to make government forms more easily discoverable for the public. Forms are one of the primary ways people access benefits and communicate with the government, yet they are often difficult to find—even when people know the agency they need.

The investigation focused on common user scenarios, ranging from knowing a form’s exact name or number to not knowing a form exists at all. The intent was to prioritize discoverability for the scenarios in which users have at least some information, while treating form awareness (users who do not know a form exists) as a secondary exploration.

🔍 Research Approach

  • Desk research: Reviewed prior 10x projects like Indexing ICRs (Information Collection Requests), which estimated ~9,300 public-facing forms and tied form discoverability to customer experience goals.

  • Policy review: Examined the 21st Century Integrated Digital Experience Act (21st Century IDEA) requirements for website modernization, digitization of forms, and standardization across agencies.

  • Comparative analysis: Studied how agencies currently organize forms online and ran Google keyword searches (e.g., “food stamps” vs. “SNAP”) to mimic real user behavior.

  • Call center analysis: Observed USA.gov call center sessions to see how often users struggle with finding correct forms.

  • Expert interviews: Spoke with 9+ subject matter experts across GSA, 18F, USDS, DoD, and other agencies to understand current practices, challenges, and opportunities.

💡 Findings

  • Validated problem: Finding forms is consistently difficult across agencies, with inconsistent organization and lack of standardization.

  • Trust factor: Improved form discoverability has potential to strengthen public trust, showing that government values citizens’ time and needs.

  • Multiple entry points: Solutions may involve better agency websites, improved search engine optimization, and/or a centralized forms database.

  • User need for validation: Even when people find forms, they often lack confidence they’ve located the correct one—suggesting validation mechanisms are important.

  • Organizational barriers: Past efforts like forms.gov failed due to lack of governance, inconsistent processes across agencies, and limited adoption.

👩🏾‍💻 Impact & Recommendation

The team recommended moving forward to Phase Two with a focus on:

  1. Prioritizing discoverability for users who know the agency or partial form details.

  2. Conducting deeper research into metadata, cataloging, and user-centered search pathways.

  3. Exploring why past repositories failed and identifying scalable, sustainable approaches.

A successful solution will likely require metadata tagging, standardization, and user-centered design artifacts (e.g., journey maps) to ensure consistency across agencies. Staffing should include both a metadata/cataloging specialist and a UX practitioner.
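
As a hypothetical illustration of what metadata tagging could capture for a single form, the sketch below defines an invented metadata record and a naive keyword match that lets a plain-language query (e.g., "food stamps") find an official SNAP form. All field names, values, and the URL are assumptions for this example.

```python
# Hypothetical metadata record for one public-facing form. Field names, the
# aliases, and the URL are invented to illustrate how plain-language tags
# (e.g., "food stamps" for SNAP) could support search.
from dataclasses import dataclass, field

@dataclass
class FormMetadata:
    form_number: str
    title: str
    agency: str
    url: str
    keywords: list[str] = field(default_factory=list)                # official terms
    plain_language_aliases: list[str] = field(default_factory=list)  # how people actually search

snap_application = FormMetadata(
    form_number="EXAMPLE-123",
    title="Application for Supplemental Nutrition Assistance Program Benefits",
    agency="Example State Agency",
    url="https://example.gov/forms/example-123",
    keywords=["SNAP", "nutrition assistance"],
    plain_language_aliases=["food stamps", "EBT application"],
)

def matches(record: FormMetadata, query: str) -> bool:
    """Naive keyword match across the title, official terms, and plain-language aliases."""
    q = query.lower()
    haystack = [record.title] + record.keywords + record.plain_language_aliases
    return any(q in term.lower() for term in haystack)

print(matches(snap_application, "food stamps"))  # True, even though the form title never says "food stamps"
```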

⚠️ Risks

  • Agency Autonomy: Each agency has its own form managers and processes, making standardization difficult.

  • Past Failures: Previous repositories like forms.gov were discontinued due to governance and adoption issues.

  • Process & Policy Barriers: Changing how agencies create, upload, and manage forms requires significant policy shifts and buy-in from forms managers.

  • Scalability: The sheer volume of forms (~9,300) requires careful prioritization and phased rollout.

  • Validation Needs: Even when people find a form, they may not know if it’s the correct one; solving for “findability” without validation risks incomplete solutions.

📝 Reflection

Form discoverability is more than a technical issue—it’s a customer experience challenge. People search for forms to accomplish critical life tasks, and government inefficiencies can undermine trust. By focusing on user needs, metadata consistency, and agency collaboration, this project set the stage for creating a more seamless and trustworthy experience in accessing government services.

Government Email Service

🏛 Context & Purpose

Email marketing is a primary way government agencies communicate with the public, but most rely on costly third-party providers (like HubSpot or govDelivery) that store large amounts of user data with limited oversight. This project explored whether GSA/TTS should create a government-focused, user-centered email marketing service with lightweight, USWDS-based templates to reduce costs, improve usability, and strengthen trust.

🔍 Research Approach

  • Literature and prior work review on email marketing tools and FedRAMP requirements

  • 12 SME interviews across GSA, Digital.gov, USA.gov, Login.gov, and 18F

  • Comparative analysis of commercial platforms (HubSpot, govDelivery, Constant Contact, Eloqua, Mailchimp, etc.)

  • Review of technical and security constraints (FedRAMP compliance, spam filtering, domain allow-listing)

💡 Findings

  • HubSpot was the “fan favorite.” Agencies found it intuitive, flexible, and user-friendly for creating templates without coding.

  • HubSpot is no longer an option. Its decision not to pursue FedRAMP authorization means agencies must transition to govDelivery, which is less user-friendly and slower at delivering template customization.

  • Switching platforms is painful. Frequent migrations create inefficiencies as agencies retrain, rebuild templates, and reconfigure processes.

  • Spam filtering is a systemic barrier. Even government emails often end up in junk folders, and new platforms face high risks of bounce/spam issues unless integrated with established communications-platform-as-a-service (CPaaS) providers.

  • Agencies don’t need “lighter” templates. Most were satisfied with current template experiences, but emphasized the need for a stable, compliant, and feature-rich platform.

  • Marketing automation is valued. Beyond email, agencies want integrated tools for CRM, analytics, social media, and customer support.

👩🏾‍💻 Impact & Recommendation

The team recommended continuing to Phase Two, but not by building a new lightweight template system. Instead, research should determine:

  1. If govDelivery can meet agency needs despite usability gaps.

  2. Whether another commercial platform could be pushed through FedRAMP.

  3. If neither is viable, whether TTS should develop its own shared service tailored to government needs.

A long-term solution must balance cost savings, usability, and trust, potentially combining lessons from the Government Notification Services (GNS) project with new marketing-specific capabilities.

⚠️ Risks

  • Adoption Barriers: Agencies have existing contracts and preferences (Microsoft, Google), making migration politically and logistically difficult.

  • Operational Risk: Email is mission-critical; disruptions during rollout or outages in a new system could severely impact government operations.

  • Cost & Resources: Building and maintaining a secure, government-run alternative would require significant long-term investment in infrastructure and staffing.

  • Security & Compliance: Matching or exceeding the security capabilities of commercial providers is challenging; failure to do so could erode trust.

  • Redundancy of Effort: Existing commercial solutions already meet most agency needs, risking duplication rather than true innovation.

📝 Reflection

My role clarified the gap between agency needs and the proposed “lightweight” template approach. Evidence showed the real problem isn’t template usability but ensuring government access to a compliant, trusted, and sustainable email marketing platform. This research highlighted the importance of framing problems around actual user pain points—not assumptions—and showed how UX-driven insights can redirect strategy toward practical, impactful solutions.

Open Source Photo Sharing Library

🏛 Context & Purpose

The pitch proposed creating an open-source image hosting platform that agencies could adopt and maintain collaboratively, based on NASA’s internal photo-sharing tool. The goal was to reduce costs, improve agency-to-agency collaboration, and enable better public engagement with government-generated imagery.

Our research found that while agencies did not see value in adopting a shared open-source hosting platform, they strongly identified a related challenge: the discoverability of free-to-use government digital content. Agencies want the public to easily find and reuse their imagery, but fragmentation across hosting platforms makes this difficult.

🔍 Research Approach

  • Market research on commercial subscription services (Flickr, Unsplash, Getty, DAMs) and open-source tools.

  • 6 SME interviews across agencies (USDA, GSA, USPTO, NC Dept. of Natural & Cultural Resources, USGS, LOC).

  • Consultation with the Social Media Community of Practice (SM-COP).

  • Cost analysis of hosting solutions and stock image licensing.

  • Desk review of Creative Commons and other lightweight discoverability tools.

💡 Findings

  • Fragmented ecosystem: Agencies use different platforms (Flickr, Unsplash, internal DAMs), making it difficult for them—and the public—to discover and share government imagery.

  • Discoverability gap: Even when .gov images exist, they often don’t surface in Google search results. Smaller agencies in particular struggle to compete with larger sources for visibility.

  • Budget constraints: Smaller agencies cannot afford subscriptions, DAMs, or stock imagery, which creates inequities in their ability to manage and share digital content.

  • Hosting distrust: Agencies expressed concern about relying on commercial platforms long-term due to sustainability, business risk, and terms of service issues.

  • Existing solutions show promise: Lightweight tools like the Creative Commons photo search extension could provide a low-cost, scalable approach to improving discoverability without requiring a single government-owned hosting platform.

👩🏾‍💻 Impact & Recommendation

We recommended continuing to Phase Two, but with a revised scope. Instead of pursuing an open-source hosting platform, the project should:

  • Explore lightweight, low-cost solutions (e.g., adapting or contributing to the Creative Commons photo search browser extension) to improve discoverability of existing free-to-use government content.

  • Focus on helping agencies contribute metadata and content into shared search APIs, improving visibility for both citizens and other agencies.

  • Prioritize usability for educators, researchers, and the public who need authoritative, reusable government imagery.

⚠️ Risks

  • Adoption barriers: Small agencies may lack staff/resources to integrate new tools into workflows, even if open source.

  • Open-source sustainability: No guarantee of long-term developer interest to update and maintain a government-tailored extension.

  • Hidden costs: “Free” tools still require labor for integration, metadata tagging, training, and security compliance.

  • IT security hurdles: Agency policies may block browser extensions or external APIs, limiting availability of solutions like a Creative Commons extension.

  • Incomplete coverage: Some government imagery will remain inaccessible due to classification, size, or archival constraints.

📝 Reflection

My research clarified that the real issue was discoverability, not hosting. Agencies don’t need another platform; they need lightweight, user-centered ways to make their existing content searchable and reusable. By reframing the problem, we redirected the effort toward a solution that respects agency constraints, supports public engagement, and builds on existing open-source ecosystems. This reinforced the importance of looking beyond the literal pitch to uncover the underlying problem worth solving.

Open Source Code Awareness

🏛 Context & Purpose

The Federal Source Code Policy (FSCP) requires agencies to update acquisition language to capture new custom code and encourage code reuse, making IT procurement faster, more effective, and less expensive. However, contracting officers and program staff may be unaware of FSCP or how repositories like code.gov can facilitate reuse. This project explored whether raising awareness of FSCP and influencing procurement practices could increase adoption of open source software (OSS) and contributions across government.

🔍 Research Approach

  • Desk research: Reviewed prior 10x projects, including Leveraging Open-Source Infrastructure, and the Federal Source Code Study.

  • SME interviews: Conducted interviews with federal staff and experts across:

    • Social Security Administration, Department of Veterans Affairs, GSA, HUD, SBA, USDA, CFPB, FEC, Digital Services Coalition (Advocacy & Education), and the New America think tank

  • Comparative analysis: Examined existing repository practices (e.g., code.gov) and cultural factors around OSS adoption.

💡 Findings

  • Awareness not the problem: Agencies interviewed were already aware of FSCP, reducing the need for broad “awareness campaigns.”

  • Cultural barriers dominate: Negative attitudes about OSS (e.g., misconceptions about security, scrutiny, and risk) remain a larger barrier than lack of knowledge.

  • Procurement inertia: Changing contracting practices is slow, bureaucratic, and often deprioritized by overburdened contracting officers.

  • Uneven readiness: Some agencies (e.g., NASA, ARL) actively publish open source code, but their highly specialized outputs are not broadly reusable. Legacy systems and resource gaps limit others.

  • Repositories insufficient alone: Code.gov and similar efforts help agencies already engaged in OSS, but do not address adoption barriers for those resistant or under-resourced.

  • Policy insufficient alone: Mandates without cultural change, leadership buy-in, and measurable success metrics have limited impact.

  • Vendor resistance: Contractors often benefit from proprietary approaches, vendor lock-in, and resistance to transparency—further slowing OSS adoption.

👩🏾‍💻 Impact & Recommendation

The team concluded that a 10x project would not meaningfully shift culture, procurement policy, or OSS adoption at this time. While open source remains strategically valuable, challenges span culture, policy, and vendor ecosystems that exceed the scope of a lightweight 10x effort.

Recommendation: Do not move forward. Instead, continue encouraging cross-agency knowledge sharing and transparency efforts. Future opportunities may arise once other guidance-focused projects (e.g., Digital Experience Guide) provide models for cultural change initiatives.

⚠️ Risks

  • Cultural resistance: Entrenched negative perceptions of OSS (security fears, scrutiny concerns, ownership/IP worries).

  • Procurement barriers: Changing acquisition practices is slow, inconsistent, and deprioritized amid heavy workloads.

  • Resource disparities: Wide variation in technical expertise, funding, and leadership across agencies prevents standardized adoption.

  • Repository limitations: Code.gov alone cannot drive adoption without additional services, support, and governance.

  • Vendor lock-in: Contractors often resist OSS due to profit motives, perpetuating dependency and opaque practices.

  • Measurement difficulty: Tracking compliance with FSCP (e.g., 20% reuse metric) remains burdensome for large agencies with multiple sub-organizations.

📝 Reflection

My role focused on clarifying whether “awareness” was the real issue. Through interviews and policy review, I surfaced that the true challenges are cultural and structural, not informational. This project reinforced the importance of digging past surface-level assumptions to uncover systemic barriers. It also highlighted how OSS, while valuable for transparency and efficiency, requires leadership buy-in, cultural change, and procurement reform—areas that a small discovery project cannot solve alone.

Automating Code Assignments

🏛 Context & Purpose

Government agencies frequently collect narrative text data through forms, claims, and reports. One especially burdensome task is manually assigning standardized codes (e.g., disability codes, injury codes, export product codes) to these records. This process is slow, costly, and prone to error. The project explored whether machine learning (ML) and natural language processing (NLP) could generalize existing agency efforts into reusable solutions, saving time and money while improving accuracy.
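
To give a sense of the kind of ML/NLP classification involved, here is a minimal, hypothetical autocoding sketch using scikit-learn. The narrative snippets and injury codes are invented; a real system would be trained on an agency's own labeled records and would route low-confidence predictions to human reviewers.

```python
# Minimal, hypothetical autocoding sketch: learn to assign a standardized code
# to short narrative text. The example records and codes are invented; a real
# system would train on an agency's own labeled data with far more care.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "worker fell from ladder while painting",
    "slipped on wet floor in the warehouse",
    "cut hand on sheet metal edge",
    "fell from scaffolding during roof repair",
    "laceration to finger from box cutter",
    "slipped on icy loading dock",
]
codes = ["FALL", "SLIP", "CUT", "FALL", "CUT", "SLIP"]  # invented injury codes

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(narratives, codes)

# Propose a code for a new record; a reviewer could confirm low-confidence cases.
new_record = "employee fell off a step stool in the stockroom"
print(model.predict([new_record])[0])
print(dict(zip(model.classes_, model.predict_proba([new_record])[0].round(2))))
```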

🔍 Research Approach

  • Desk research: Reviewed literature on AI, ML, NLP, and autocoding in government data contexts.

  • Comparative analysis: Assessed government-built and third-party ML tools (e.g., Census Bureau’s Sasha platform, Amazon Comprehend, IBM Watson).

  • SME interviews: Conducted 10 interviews with data scientists, statisticians, project managers, and technologists across:

    • U.S. Census Bureau

    • National Institute for Occupational Safety and Health (NIOSH)

    • Department of Veterans Affairs (VA)

    • Department of Energy (DOE)

    • Bureau of Labor Statistics (BLS)

    • GSA (AI Center of Excellence)

💡 Findings

  • High demand across agencies: Many agencies struggle with unstructured text data and see strong value in automating code assignments.

  • Significant efficiency gains: Manual coding takes ~15 minutes per record; automation could save hundreds of millions of labor hours annually.

  • Accuracy improvements: ML-based classification reduces inconsistencies and human error in applying codes.

  • Custom approaches required: Sensitive datasets with PII prevent a one-size-fits-all shared platform. Each agency needs tailored models and secure infrastructure.

  • Infrastructure matters: Agencies need scalable, secure environments (e.g., cloud-based processing) rather than local machines.

  • Training & adoption challenges: Non-technical staff need accessible tools and training to use ML-based solutions. Use cases and guidance from trusted government sources increase adoption.

  • Early adopters self-taught: Many SMEs who implemented autocoding learned ML and Python independently, signaling a need for structured support and training.

👩🏾‍💻 Impact & Recommendation

I recommended moving forward to Phase 2. The project should:

  • Develop a use case and implementation guide for automating code assignments.

  • Contribute content and training materials to the AI Sharing Platform to help agencies adopt ML-based classification.

  • Document successes and failures across agencies to build credibility and accelerate uptake.

By enabling agencies to apply autocoding, the government can improve accuracy, free up skilled staff for higher-value tasks, and enhance service delivery by reducing processing delays.

⚠️ Risks

  • Collaboration limits: Agency SMEs may have limited bandwidth to contribute documentation or case studies.

  • Adoption barriers: ML/NLP can appear overly complex, discouraging non-technical staff from adoption without strong training and use cases.

  • Security concerns: Handling sensitive PII requires agency-specific infrastructure; a centralized solution could introduce risk.

  • Bias in algorithms: ML models may inherit biases from training data; ongoing monitoring and retraining are necessary.

  • Resource constraints: Agencies need sufficient data volume, technical expertise, and infrastructure to successfully implement autocoding.

📝 Reflection

My research clarified that manual coding is a universal pain point across agencies, but each requires a tailored approach to automation due to data sensitivity and infrastructure differences. The biggest opportunity lies in building credible, government-sourced guidance and training, making ML adoption accessible to both technical and non-technical staff. This project highlighted how AI can drive efficiency in government but must be paired with human-centered rollout strategies to succeed.

Surfacing User Research

🏛 Context & Purpose

This project explored the viability of creating a government-wide user research repository that would make discovery reports accessible across agencies and to the public. The hypothesis was that sharing research would help agencies better understand user needs and reduce duplicate research efforts.

🔍 Research Approach

  • Literature & industry review: Studied how non-government organizations share and manage research insights (e.g., Dovetail, Airtable, Confluence, GitHub).

  • SME interviews: Conducted 9 interviews with program managers, designers, and researchers from DOT, USDS, PIF, U.S. Tax Court, CFPB, Kessel Run, and 18F.

  • Comparative analysis: Looked at informal vs. formal approaches to organizing research within agencies.

💡 Findings

  • Low demand for a repository: Teams did not view a lack of shared research as a blocker. Many rely on informal sharing (e.g., meetings, internal chats) rather than formal reports.

  • Time and bandwidth constraints: Teams lacked capacity to package research into standardized, shareable formats without a mandate or additional resources.

  • Privacy and sensitivity concerns: Many research projects involve PII or sensitive data, requiring anonymization before sharing. This added burden reduced willingness to contribute.

  • Limited perceived benefit: Some teams focus on highly specialized users or methods, making them less likely to reuse insights from other teams.

  • Cultural hesitation: Teams were reluctant to share “failures” or imperfect studies, raising concerns about quality control in a repository.

  • Public sharing not a priority: Only a minority (18F and PIF) saw value in making research public to build organizational credibility; most preferred keeping insights internal.

👩🏾‍💻 Impact & Recommendation

Recommendation: Do not move forward at this time. Instead of building a formal research repository, TTS should:

  • Promote existing research resources and communities of practice.

  • Provide guidance on research methods, participant recruitment, and tool use.

  • Explore future pilots by connecting overlapping teams to test knowledge-sharing before mandating contributions.

The repository idea may be revisited later when teams have more mature practices and clearer overlaps in their research needs.

⚠️ Risks

  • Low participation: Without strong incentives or mandates, teams are unlikely to contribute consistently.

  • Privacy and legal barriers: Sensitive research data requires anonymization, which reduces impact and adds workload.

  • Quality control issues: Risk of poorly conducted research being shared and reused, potentially lowering overall research quality.

  • Resource constraints: Teams already stretched thin would deprioritize contributing to a repository.

  • Cultural resistance: Hesitation to share failures or sensitive findings undermines repository robustness.

📝 Reflection

My research highlighted the difference between theoretical value and practical adoption. While sharing insights could reduce duplication, cultural, privacy, and bandwidth barriers made the idea unworkable. More immediate value lies in helping teams improve their research practices and connect to existing communities. This project reinforced the importance of grounding solutions not just in potential benefits but in realistic assessments of team capacity and willingness to participate.

Verifying Cross-Sector Transactions

🏛 Context & Purpose

This project explored whether the government could simplify how citizens share and verify personal data across public and private sectors (e.g., applying for loans, benefits, or jobs). The goal was to reduce paper-based, costly, and time-consuming processes while giving individuals more control over their own data.

🔍 Research Approach

  • Desk research into existing government data-sharing practices and prior initiatives (e.g., MyUSA, now login.gov).

  • Policy and regulatory review of privacy and data-use laws such as the Fair Credit Reporting Act.

  • SME interviews with staff from Login.gov, GSA’s Data Protection team, Identity PMO, and Vote.gov.

  • Comparative analysis of past data-sharing initiatives and their successes/failures.

💡 Findings

  • Technology already exists: Mechanisms for cross-agency data sharing (e.g., DHS checking HHS records for child support during passport applications) demonstrate feasibility.

  • Privacy and regulatory constraints: The primary barrier is not technical but legal and procedural, with strict rules governing how data can be used.

  • Revenue dependency: Some agencies resist data-sharing initiatives because reducing public cost would reduce their internal revenue streams.

  • Login.gov overlap: Current and planned capabilities of Login.gov and the Identity Validation Shared Service already address many goals of this project.

  • Eligibility complexity: Verifying eligibility (e.g., SBA loans, Medicaid) requires additional data that is not easily standardized or reusable across contexts.

  • Security concerns: Centralizing citizen data increases its attractiveness as a target for cyberattacks.

👩🏾‍💻 Impact & Recommendation

Recommendation: Do not move forward. While inefficiencies in data sharing remain a pain point, current government initiatives (Login.gov, Identity Validation Shared Service) already address key aspects of the problem. Additional investment in this pitch risks duplicating ongoing work. Future improvements should focus on strengthening privacy protections, eligibility-specific solutions, and incremental enhancements to existing systems.

⚠️ Risks

  • Privacy and legal risks: Expanding cross-sector data use could conflict with existing regulations and raise public trust concerns.

  • Security vulnerabilities: Centralized datasets increase breach risk.

  • Duplication of effort: Work overlaps with ongoing initiatives like Login.gov.

  • Resistance from agencies: Potential revenue loss and retraining needs create barriers to adoption.

  • Eligibility complexity: Reusable data is limited, and eligibility checks often require unique datasets.

📝 Reflection

This project highlighted how policy, governance, and trust—not technology—are the main barriers to cross-sector data sharing. My role helped surface the realities of duplication risk and cultural resistance within agencies. While the intent to streamline citizen experiences is strong, the lesson here was that timing and alignment with existing initiatives matter as much as the idea itself.

Applicant Tracking System

🏛 Context & Purpose

Federal talent acquisition is burdened by regulatory complexity, inefficient processes, and insufficient tooling. Recruiters often rely on spreadsheets, emails, and manual processes to manage applicants—leading to errors, duplication, and poor candidate experiences. This project aimed to investigate whether a modern Applicant Tracking System (ATS) could improve recruiting efficiency, candidate transparency, and overall hiring outcomes across government agencies.

🔍 Research Approach

  • Desk research on commercial ATS capabilities and legacy government tools (e.g., HRConnect).

  • SME interviews with recruiters, analysts, and talent acquisition leaders across GSA, CFPB, 18F, and private sector vendors.

  • Comparative analysis of workflows between manual spreadsheet-based hiring and ATS-enabled recruiting.

💡 Findings

  • Recruiter inefficiency: Staff spend 2–8 hours weekly on manual data entry; pipelining can take ~97 hours per candidate.

  • Data issues: Errors and inconsistencies stem from spreadsheets and disconnected tools, creating garbage-in/garbage-out problems.

  • Candidate visibility gap: Applicants cannot track their status, forcing recruiters into time-consuming email exchanges.

  • Reputation risk: Poor hiring processes harm government competitiveness for top talent compared to the private sector.

  • ATS benefits: Even a basic system could centralize candidate tracking, reduce errors, pre-qualify applicants, automate job board postings, and enable better diversity targeting.

  • Market maturity: Robust ATS products already exist; there is little reason for TTS to build a custom solution.

👩🏾‍💻 Impact & Recommendation

Recommendation: Move forward to Phase 2. Rather than building a new platform, TTS should evaluate existing ATS solutions, assess FedRAMP-readiness, and support agencies in adoption. A commercial ATS could provide:

  • Centralized, searchable candidate databases.

  • Improved pipelining and diversity targeting.

  • Candidate self-service dashboards.

  • Analytics for refining job postings and identifying hiring bottlenecks.

Phase 2 should identify the most suitable product, evaluate technical implications, and begin the FedRAMP approval process.

⚠️ Risks

  • FedRAMP approval: Many ATS vendors lack FedRAMP authorization; obtaining it can be a lengthy process.

  • Adoption challenges: Recruiters may resist transitioning from spreadsheets or legacy systems.

  • Training needs: Systems may require significant onboarding for both recruiters and hiring managers.

  • Integration gaps: Compatibility with federal HR systems (e.g., USAJobs, HRConnect) may be limited.

  • Vendor dependency: Long-term reliance on a commercial provider could pose cost or flexibility risks.

📝 Reflection

This project reinforced how process inefficiencies have real human consequences—not only slowing recruiters but discouraging qualified applicants. My research surfaced the importance of usability in enterprise systems and highlighted the need for federal hiring to match the private sector in candidate experience. The key takeaway: the government doesn’t need to reinvent the wheel but must invest in scalable, modern tools to attract and retain talent.

Improving Recalls Data Quality and Delivery

🏛 Context & Purpose

The Consumer Product Safety Commission (CPSC) estimates that consumer product injuries, deaths, and property damage cost the U.S. over one trillion dollars annually. This project investigated whether improving the quality, consistency, and speed of federal recalls data could help retailers more effectively remove dangerous products from the marketplace and improve consumer safety.

🔍 Research Approach

  • Desk research into prior recall projects and artifacts, including a 2018 recalls API initiative.

  • SME interviews with FDA, NHTSA, and members of the prior recalls project team.

  • Retailer interviews with Amazon, Costco, and Walmart to understand how they receive and use recall data.

  • Policy review of recall regulations and agency procedures.

💡 Findings

  • Fragmentation across agencies: Multiple agencies oversee recalls (CPSC, FDA, USDA, EPA, NHTSA, USCG), each with unique processes, data standards, and oversight structures.

  • Retailer adaptations: Large retailers have adapted to existing recall processes and data delivery; many already rely on suppliers and distributors for direct notifications. Costco reported better data from suppliers than from agencies.

  • Unified API not valued: Retailers did not see value in a government-wide recalls API; they only want recall data relevant to products they sell.

  • Data quality issues: Retailers reported gaps in detail (e.g., SKUs, manufacturing dates, instructions) and delays in receiving complete recall information, which slowed their internal response processes.

  • Consumer response is the bigger problem: Low product return rates (e.g., only ~50 of 175,000 recalled BuckyBalls returned in 2013) highlight that consumer behavior, not data quality, is a primary barrier to effective recalls.

  • Ongoing initiatives: Agencies such as CPSC and FDA are already pursuing internal improvements (CPSC Strategic Plan 2018–2022; FDA’s “New Era of Smarter Food Safety”).

👩🏾‍💻 Impact & Recommendation

Recommendation: Do not move forward. The recall process involves too many agencies with entrenched practices to standardize effectively. A unified solution risks duplication of existing efforts and lacks retailer demand. Instead, each agency should continue internal reviews of their recall processes, focusing on:

  • Cleaning and structuring data fields to reduce errors.

  • Providing machine-readable formats (sketched below).

  • Minimizing free text fields and increasing standardized categories.

Agencies should also prioritize consumer-side improvements, increasing return/disposal rates and ensuring recall communications are actionable and trusted.
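
To make the machine-readable format recommendation above concrete, here is a hypothetical structured recall record. The fields and values are invented and do not reflect any agency's actual schema.

```python
# Hypothetical structured recall record illustrating the "machine-readable,
# minimal free text" recommendation. Field names and values are invented and
# do not reflect any agency's actual schema.
import json
from datetime import date

recall_record = {
    "recall_id": "EXAMPLE-2020-001",
    "issuing_agency": "CPSC",
    "product_name": "Example brand space heater",
    "skus": ["123456789012", "123456789029"],           # retailers match on these
    "manufacture_date_range": {
        "start": date(2019, 1, 1).isoformat(),
        "end": date(2019, 6, 30).isoformat(),
    },
    "hazard_category": "fire",                           # standardized category, not free text
    "remedy": "refund",                                   # standardized remedy code
    "consumer_instructions_url": "https://example.gov/recalls/EXAMPLE-2020-001",
}

print(json.dumps(recall_record, indent=2))
```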

⚠️ Risks

  • Governance complexity: No single entity oversees all recall data, making standardization unlikely.

  • Policy & legal barriers: Privacy, regulatory, and contractual rules limit how and when data can be shared.

  • Retailer disinterest: Without strong retailer buy-in, investments in new recall infrastructure risk irrelevance.

  • Resource duplication: Efforts may overlap with ongoing agency-led initiatives (CPSC, FDA).

  • Consumer impact gap: Even with better data, recall success depends on public response, which remains low.

📝 Reflection

This project reinforced the importance of scoping research to where government can realistically influence change. My research clarified that technical fixes (like APIs) cannot solve systemic, multi-agency governance and consumer behavior challenges. The experience underscored the value of questioning assumptions, validating stakeholder demand, and aligning government projects with areas of true impact.

Digitizing Spill Response Guidance

🏛 Context & Purpose

NOAA is mandated to provide scientific and technical support during coastal oil and chemical spills. Traditionally, laminated paper job aids have guided spill responders and planners. This project investigated whether a new digitized format (beyond PDFs) would improve usability in the field.

🔍 Research Approach

  • Desk research into NOAA’s job aid history, formats, and current availability.

  • Usage analysis of job aid download and distribution statistics (2015–2020).

  • SME interviews with NOAA, EPA, and state spill response officials (Scientific Support Coordinators, emergency response chiefs, managers).

  • Exploration of fieldwork practices, including aerial spotting, training contexts, and device management challenges.

💡 Findings

  • No significant pain points with PDFs: SMEs reported current PDF-based job aids are adequate for fieldwork and training. No strong demand for a mobile app emerged.

  • Mobile devices not practical in the field: Visibility challenges (screen glare, brightness, color accuracy) and environmental factors (sun, wind, precipitation) make digital use difficult. Printed versions remain more reliable for interpreting images and graphics.

  • Declining demand: NOAA download statistics show a consistent decrease in job aid use since 2015.

  • Limited adoption of mobile tools: NOAA and Coast Guard mobile apps have had little traction due to cultural resistance and device management barriers.

  • Training use case already covered: Job aids are effective in classroom-based training when printed or viewed on desktop monitors.

  • Geospatial solutions dominate incident response: EPA and NOAA use digital mapping tools in command centers, reducing the need for a separate job aid application.

  • Future potential in data collection: If spill or inspection data collection moves digital, tying job aids contextually to forms may create new value—but demand is currently absent.

👩🏾‍💻 Impact & Recommendation

Recommendation: Do not move forward. The investigation showed that current PDFs are sufficient and better suited to field conditions than new digital formats. Instead of developing a mobile-friendly tool, NOAA should:

  • Continue providing downloadable, printable PDFs.

  • Periodically update content and issue supplemental materials.

  • Monitor field and training practices for signs of changing needs (e.g., more dynamic roles or increased digital data collection).

  • Track industry improvements in guides (e.g., NCCOS coral identification guides) for inspiration.

⚠️ Risks

  • Low adoption: Mobile applications risk limited use due to cultural resistance and field conditions.

  • Tool sprawl: Introducing new digital job aid apps could duplicate existing geospatial solutions.

  • Misalignment with field usability: Brightness, color accuracy, and environmental barriers make mobile devices unreliable for spill response imagery.

  • Resource misallocation: Investment in digitization may not deliver meaningful improvements given declining demand.

📝 Reflection

This project highlighted the importance of validating demand before digitizing legacy processes. My UX research uncovered that while digitization often appears modern and necessary, context and field conditions determine real value. The key takeaway was the importance of focusing limited resources on initiatives where technology can truly enhance safety, usability, and adoption—rather than duplicating existing, adequate tools.

Central App for Federal Government Internships

🏛 Context & Purpose

Federal agencies each run their own internship programs, resulting in a fragmented experience for student applicants and additional complexity for agencies. The pitch proposed creating a central application system for federal internships, aiming to improve the applicant experience and placement rates across agencies.

🔍 Research Approach

  • Desk research on existing government internship programs (e.g., State Department, GSA, OPM, Coding It Forward).

  • Stakeholder interviews with program managers, HR specialists, and recruiters across agencies, including U.S. Department of State, Census Bureau, GSA, OPM/USAJobs, and private recruiting partners.

  • Comparative review of centralized internship/job platforms (e.g., LinkedIn, ZipRecruiter, Open Opportunities).

💡 Findings

  • No strong demand for a central application process: Agencies and applicants generally felt current processes worked adequately, despite minor inconveniences like duplicate uploads and tracking multiple program deadlines.

  • Main challenge is discoverability, not uniformity: Agencies and students alike reported difficulty tracking internship opportunities across multiple programs and timeframes.

  • Quality concerns not tied to applications: Agencies were satisfied with candidate quality; constraints on placement were driven more by budget limitations than by application inefficiencies.

  • Opportunity in centralizing postings, not applications: A shared platform could surface opportunities in one place without requiring agencies to abandon their existing application processes.

  • Emerging alignment with USAJobs: The Engagement Manager for USAJobs expressed interest in extending Open Opportunities to internships, allowing students to browse internships centrally while still applying through individual programs.

👩🏾‍💻 Impact & Recommendation

Recommendation: Move forward by exploring collaboration with USAJobs. Instead of a central application, the project should:

  • Build on Open Opportunities to centralize internship postings across agencies.

  • Allow students to browse, filter, and discover a broader set of opportunities (illustrated in the sketch below).

  • Support agencies with features like applicant presorting and candidate profile matching to increase efficiency.

  • Improve HR workflows by consolidating opportunity posting and applicant visibility without disrupting agency-specific review processes.

This approach preserves agency autonomy while enhancing applicant experience and talent matching.
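
Because the recommendation centers on aggregating postings rather than applications, a small sketch may help show the shape of the idea: a shared feed of postings that students can browse and filter, with each posting linking back to the agency's own application process. The data model, function, field names, and URLs below are hypothetical and do not reflect how USAJobs or Open Opportunities is actually built.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class InternshipPosting:
    """Illustrative posting record for a shared listing hub (fields are assumptions)."""
    agency: str                 # e.g., "Department of State", "GSA"
    title: str
    location: str
    application_deadline: date
    requires_clearance: bool
    apply_url: str              # applicants are routed back to the agency's own application process


def discover(postings: list[InternshipPosting],
             agency: Optional[str] = None,
             before_deadline: Optional[date] = None,
             include_clearance: bool = True) -> list[InternshipPosting]:
    """Filter a combined feed of postings the way a student might browse it."""
    results = []
    for p in postings:
        if agency and p.agency != agency:
            continue
        if before_deadline and p.application_deadline > before_deadline:
            continue
        if not include_clearance and p.requires_clearance:
            continue
        results.append(p)
    return sorted(results, key=lambda p: p.application_deadline)


# Example usage with invented postings:
feed = [
    InternshipPosting("GSA", "UX Research Intern", "Washington, DC",
                      date(2020, 3, 1), False, "https://example.com/gsa/apply"),
    InternshipPosting("Department of State", "Data Intern", "Remote",
                      date(2020, 2, 15), True, "https://example.com/state/apply"),
]
open_to_me = discover(feed, include_clearance=False)
```

Keeping apply_url pointed at each agency's existing process is what preserves agency autonomy while still centralizing discovery.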

⚠️ Risks

  • Adoption: Agencies may be hesitant to maintain postings in multiple places or shift recruiting workflows to a centralized hub.

  • Technical/security barriers: Some internships may have security or compliance restrictions that prevent posting on Open Opportunities.

  • Incomplete coverage: If only some agencies participate, students may still face fragmented experiences.

  • Workload concerns: Agencies may fear that managing centralized postings will add responsibilities without reducing existing burdens.

  • Budgetary fluctuations: Internship availability will still be shaped more by funding than by application systems.

📝 Reflection

This project reinforced the importance of identifying the true problem—in this case, fragmented discovery rather than fragmented applications. My UX research surfaced that while agencies don’t see value in a central application, they recognize the benefit of centralizing opportunities to widen applicant pools and improve visibility. The lesson here was that collaboration with existing systems (like USAJobs) is more sustainable than building new ones, and that focusing on discoverability and user experience can deliver greater value than forcing structural changes to entrenched processes.