Asking Faculty to Carry the Wrong Load

The previous article in this series made a simple argument: regulated programs have to be built backward. Regulatory requirements, funding alignment, and delivery structure have to come first. Curriculum follows. Build it the other way and the program pays for it later.

This article goes deeper into what "paying for it later" actually looks like — because it isn't one problem. It's several, and they compound.

The Default Move

When a college decides to launch a new technical program, the sequence is predictable.

Leadership greenlights the concept. A faculty member with relevant field experience gets assigned to build it out. And because curriculum development is how most institutions understand program building, that's where the work begins.

The faculty member does exactly what they're trained to do. They start writing courses.

This is where things quietly start to go wrong.

Curriculum is being used to define the program instead of to support it. And at that stage, the foundational decisions haven't been made — not because anyone is being careless, but because nobody has been assigned to make them.

The faculty member fills the gap as best they can. They're a subject matter expert. They understand what students need to learn. But knowing the field and knowing how to architect a program within a regulated, funded, compliance-driven institutional environment are two different things. One is expertise in the subject. The other is expertise in the system.

Both matter. And when only one is present at the design stage, the cracks appear later.

The Approval Labyrinth

California has a rigorous curriculum approval process. That's genuinely a good thing. But the process assumes someone is actively shepherding each proposal through it — and at most institutions, that role doesn't clearly belong to anyone.

Most course proposals move through sequential approvals: department review, dean sign-off, curriculum committee, curriculum chair, and academic senate. Each step has its own timeline, and each college has its own system, its own meeting schedule, and its own reviewers.

What most people outside the process don't know is that rejection at any stage can send the proposal back to the beginning. Not one step back. The beginning. And the system doesn't chase anyone.

A proposal sitting in a reviewer's inbox doesn't trigger an alert. It doesn't escalate. If the faculty member who submitted it isn't monitoring their email closely, the proposal can stall for weeks without anyone realizing it's stalled. Programs miss semester launch windows not because of complexity, but because of a missed email. And if that email goes unnoticed in April or May, it may not be discovered until September because of summer break.

The institutions where this works smoothly aren't the ones with simpler proposals. They're the ones where someone knows the process well enough to get ahead of it — reaching out to each reviewer before a proposal arrives, flagging what reviewers tend to push back on, knowing which committee meets when, and what they need to see. Simple problems get resolved before they become rejections. The process moves.

That kind of navigation isn't instinct. It's pattern recognition built from repetition. And it's rarely part of anyone's formal job description. Usually it falls to the dean to provide this oversight. But deans are simultaneously sitting on curriculum committees, serving on hiring committees, and fielding whatever crisis landed in their inbox that morning. A new proposal that isn't actively being tracked by someone is easy to lose — not through negligence, but through the simple math of too many priorities and not enough hours.

The Architecture Decision Nobody Flags

Before a single approval is submitted, there's a decision that shapes everything downstream — and it almost never gets the attention it deserves.

Should this program be structured as one large course, or as a series of smaller stackable units?

When faculty are making that call, there's a natural pull toward the larger course. And it isn't unreasonable. A single course means a single Course Outline Report to write, a single proposal to shepherd through approval, a single set of reviews to navigate. Fewer courses mean less work at every stage of a process that is already demanding.

From a curriculum standpoint, the large course often makes sense too. It allows for cohesive instruction, logical sequencing, and a clean student pathway. The workload incentive and the instructional logic point in the same direction — which is exactly why the question rarely gets examined.

But from a program architecture perspective, that decision has consequences that aren't visible at the design stage.

A large course can't be bridged across semesters at most institutions. It carries minimum hour requirements that constrain when and where it can be offered. It may not fit the scheduling realities of every student population the program intends to serve — populations that sometimes aren't even identified until after the program is built.

Stackable units solve different problems. They can be offered in sequence or independently. They fit more easily into non-traditional delivery contexts. They create intermediate credentials that carry standalone value. They open funding pathways that a single large course may not qualify for.

The right answer depends on factors that have nothing to do with curriculum quality — and everything to do with how the program needs to function in the world. Faculty aren't positioned to weigh those factors, not because of any shortcoming, but because that's not the information they're working with when the design decisions get made.

When the Intent Isn't Known Until Too Late

The consequences of that architecture decision became concrete in an aviation program I worked on.

The program had been built around a cohort model. The courses were well-designed, the content was solid, and the structure made sense for the student population it was originally designed to serve.

After seeing its success, the college wanted to extend the program to local high schools as part of a CTE enrichment offering. The problem was immediate: the course was too large. High school students couldn't absorb the contact hours after school, and the college couldn't bridge semesters in the high school context. The structure that worked cleanly for a college cohort simply didn't translate.

The intent to serve high school students had never been part of the original design conversation. Not because anyone was negligent, but because nobody had surfaced the question before the program was built.

The solution was to break the course into stackable pieces that could be offered at the high school in manageable segments while still functioning as a coherent sequence within the college cohort model. It worked. But it required reworking a program that had already been built, approved, and launched.

That rework — restructuring course architecture, revisiting approvals, realigning delivery — is expensive and disruptive in ways that a different conversation on the front end would have avoided entirely.

Two Kinds of Expertise

There's a version of this story where the faculty member or the curriculum developer gets blamed. That's the wrong read.

Curriculum experts are genuinely skilled at something hard: taking complex knowledge and designing instructional experiences that help students learn it. That's not a secondary concern. It's central to whether a program actually produces competent graduates.

But it's a different skill set than knowing how a CTE pathway might need to extend to a high school, or how course architecture affects funding eligibility, or how to move a proposal through a sequential approval process without losing a semester to an unanswered email.

Transfer is another dimension that gets overlooked for the same reason. Researching articulation opportunities with four-year universities isn't something that happens after the curriculum is written — it has to inform what gets written in the first place. Course content, unit structure, and learning outcomes may all need to align with a potential transfer partner before a single Course Outline Report gets drafted. That requires knowing which universities have relevant programs, understanding what their articulation requirements look like, and having the industry relationships to make those conversations happen. Faculty building a program from scratch rarely have all of that — not because they aren't capable, but because that kind of network and background takes years to develop in a specific context, and the program needed it on day one.

Both perspectives are necessary — the instructional and the structural. And they have to work together from the beginning, not in sequence, and not with one operating in isolation from the other.

What typically happens instead is that a single person — usually a faculty member — gets handed the whole task and is expected to figure out what they don't know as they go. The curriculum gets built. The structural problems surface later. By then, the institution is invested, momentum is real, and rebuilding feels like failure rather than correction.

It isn't failure. It's the natural result of a design process that separated things that needed to be integrated.

The Compounding Effect

Each of these problems — structural misalignment, approval stalls, architectural inflexibility — is solvable on its own.

The difficulty is that they tend to arrive together, because they share the same root cause: the program was designed without someone whose job was to ask the structural questions before anything got built.

By the time the questions surface, they're no longer design questions. They're operational problems. And operational problems in regulated programs are harder to fix, more expensive to address, and more disruptive to the students and institutions trying to make the program work.

The programs that avoid this aren't the ones with more resources or better faculty. They're the ones where someone asked the right questions first.

None of This Is Inevitable

And none of it is a faculty problem. Faculty who get handed an undefined program and asked to build it are doing the best they can with the information and authority they have. The gap isn't in their effort or their expertise. It's in what they're being asked to carry — structural decisions that require a different kind of knowledge, external relationships that take years to build, and process navigation that requires pattern recognition most faculty have no reason to have developed.

The programs that launch cleanly and function the way they were designed share something in common: the structural questions got asked before the curriculum questions. Someone was thinking about approval timelines before proposals were written, about course architecture before content was developed, about transfer pathways and high school partnerships before the program was locked in. That freed faculty to do what they're actually good at — building courses that work, within a framework that was already solid.

Regulated programs fail or stall at launch for a lot of reasons. But those reasons tend to share the same origin — the structural work that had to happen first didn't happen at all, and the weight of it landed on people who were never meant to carry it.

Getting that sequence right doesn't require starting over or adding layers of bureaucracy. It requires asking the right questions at the right time, before the curriculum is written, before the proposals are filed, before the program is committed to a shape it may not be able to hold.

That's where the work begins. And that's where most programs either set themselves up to succeed — or quietly set themselves up to struggle or fail.

#CommunityColleges #WorkforcePrograms #TechnicalPrograms #AviationTraining #CTEPrograms #ProgramDevelopment

Next: Why Regulated Programs Fail: Building Forward Instead of Backward