Keeping Track of Fast-Tracks to New Careers
Last week, President Biden announced a historic new investment in the Pell Grant program. The investment responds both to COVID-19's impact on postsecondary education and to the need to ensure learners gain access to academic programs and training that promote economic mobility, and it aligns with trends that have accelerated over the last several years. Even before the pandemic, displaced workers and mid-career changers were increasingly opting for faster — and cheaper — entry points into the workforce.
President Biden’s speech comes on the heels of a March announcement from Senators Tim Kaine (D-VA) and Rob Portman (R-OH), in which they reintroduced the Jumpstart Our Businesses by Supporting Students (JOBS) Act, which would make short-term (non-degree) programs eligible for federal Pell Grants.
Short-term programs help meet the needs of COVID-impacted workers. The programs are often closely aligned with the demands of local labor markets and can serve as a springboard for those interested in “stacking” credits toward longer degree programs, entering specialized occupations or pursuing education as a working adult.
But funding short-term programs won’t help unless we know how to screen them by real-world success rates.
Too often, short-term training programs fail to prove their worth or produce equitable outcomes. A recent report by New America paints a bleak picture, noting that more than half of adults with a short-term certificate or credential earn income below the national poverty line — and that short-term credentials can be particularly unhelpful for Black and Latinx students.
For short-term programs to work equitably and effectively, we need to track outcomes data and tie funding to successful program results.
Instead of simply opening the Pell floodgates to scores of potentially shoddy programs, the Senators’ bill and the Biden administration’s plan should set up a system of accountability for all education and training programs, from short-term programs to four-year degrees, that rewards workforce development and measures the impact on students’ lives and economic mobility.
The federal government has never had a good way to assess the impact of traditional degree programs on individual learners’ lives, let alone accelerated programs. Historically, regional accreditors, the organizations that accredit most colleges and universities, have tended to focus on qualitative measures of institutional quality such as governance, teaching technique, and building facilities. National accreditors, which accredit primarily career and technical schools, measure graduation, placement and licensure rates, but represent a relatively small slice of the postsecondary field, and their approaches have not taken hold in the wider higher education community. Neither type does much to track post-program earnings or equity.
The JOBS Act tries valiantly to cobble together quality assurance thresholds for Pell eligibility from existing systems. Under its provisions, programs must provide training aligned with the needs of state and local employers, award recognized credentials, satisfy industry licensure requirements, and meet the reporting requirements of the Gainful Employment rule — a 2014 regulation issued under the Higher Education Act of 1965 that applies only to students who take on debt.
But none of these checks measures outcomes such as the technical competency gained in a program, whether graduates get jobs, or whether they earn more money or enter the workforce more quickly as a result. Worse, a haphazard quality assurance system will apply inconsistent measures across providers, making it difficult to compare program success rates.
To truly evaluate the impact of a program on a learner, leaders in postsecondary education must develop a new system that provides baseline definitions of student outcomes and a codified framework around data collection and reporting that enables consistent, comparable quality evaluation across all higher education programs. Using this system, data collected on program outcomes would reflect students’ prospects for real-world success and apply across programs, whether a short-term training course or a four-year degree. Prospective students, employers, and policymakers could then leverage that data to draw reliable comparisons between providers and make informed decisions about which programs to enroll in, hire from, and fund.
With all programs comparable at a glance, bad actors and lackluster providers would have a harder time hiding their programs’ poor ROIs. This in turn would reward providers that deliver the skills (technical and soft) that help students land in high-paying, productive employment.
The JOBS Act’s goal is to connect students to good programs that launch them into the workforce fast. To have any confidence that we know what “good” means, leaders in government, industry, and postsecondary education must work together on a data collection and reporting system that raises accountability and champions student success. Only then will the bill be a good use of taxpayer dollars.
Kristin Sharp is the CEO of the Education Quality Outcomes Standards Board (EQOS), a non-profit focused on collecting and reporting results data in education and workforce programs. EQOS is currently piloting new outcomes measurement systems in CO, IN, and NJ. Kristin also spent nearly 10 years as senior staff for Senators Warner (D-VA), Pryor (D-AR), Klobuchar (D-MN) and Lugar (R-IN).