Every IoT product team I've worked with in the last decade has eventually said some version of the same sentence. It usually arrives mid-meeting, when an architectural decision starts feeling expensive: "we'll fix it in firmware later." The phrase is meant to be reassuring. It almost never is. Over-the-air updates have become the rhetorical insurance policy for connectivity decisions that should have been made better the first time, and the gap between what firmware can actually do and what teams assume it can do is the source of more avoidable cost in this industry than any other single factor.

Call it the firmware fallacy. The assumption is that anything connected can be updated, anything updated can be reconfigured, and anything reconfigured is therefore a soft constraint rather than a hard one. The first half of that chain holds up well. The second half does not.

What firmware can and cannot reach

Modern over-the-air systems are remarkable. A team with reasonable competence can push a binary, validate the rollback path, and get a fleet of half a million devices onto a new application stack in a controlled rollout over the course of a quarter. None of that was true ten years ago, and the people building these systems deserve more credit than they get.

But the things firmware can reliably reach are application-layer concerns: the business logic, the protocol stack above the radio, the edge ML model, the local UI. The things firmware cannot reach, or can reach only with a degree of risk that effectively makes them off-limits, are the things that constrain connectivity strategy: the radio modem firmware (vendor-locked, certification-bound), the SIM technology (physical, then logically locked to a profile lifecycle), the eUICC posture (constrained by the SM-DP+ relationship that was set up at manufacturing time), and the certification regime itself (which doesn't care what your firmware can do).

The firmware fallacy is not that updates don't work. It's that the things firmware can update are precisely the things that don't tend to be the constraint. The constraints sit one layer below.

The five-year clock

Long-lived IoT devices, the kind that are designed to be in the field for a decade or more, run into a structural problem the firmware fallacy obscures. The connectivity environment they were designed for has a half-life that is almost certainly shorter than their physical service life. A device shipped in 2020 was designed against an SGP.02 universe with single-carrier assumptions and 4G as the default radio. A device being shipped in 2026 is being designed against SGP.32, an emerging IFPP workflow, multi-carrier orchestration as a realistic option, and 5G with NTN satellite on the visible horizon.

The 2020 device still has six years of expected service life. The connectivity environment around it has moved a generation. Firmware updates can keep its application logic current. They cannot move it from SGP.02 to SGP.32. They cannot give it a different SIM. They cannot recertify it for a new band. The architectural decisions made in 2020 are durable, by design.

Three patterns that compound the problem

In practice, the firmware fallacy interacts with three other patterns to produce the worst outcomes I see in the field.

The deferred SIM decision.

A team chooses a removable consumer-grade SIM at launch, often for cost reasons or because it was what the contract manufacturer had on hand. The decision is rationalized as "we'll move to MFF2 in v2." When v2 arrives, the v1 fleet is already 200,000 devices deep and the cost of changing is prohibitive. The architectural debt now lasts as long as the v1 product line does.

The single-carrier launch.

A team launches with one carrier because integrating a second one feels like a distraction. The plan is to add a second carrier later, when scale justifies it. By the time scale justifies it, the platform integration is single-carrier all the way down, and adding a second one means rebuilding the device management layer. The single-carrier decision sticks.

The certification deferral.

A team certifies for one geographic market with the assumption that other markets can be added "when we expand." The eventual expansion uncovers that the device's radio configuration cannot be certified for the new market without a hardware revision. The fleet that was supposed to expand internationally cannot.

In each case, the rhetorical move was the same. A real decision was deferred on the assumption that firmware would handle it later. Firmware, by structure, cannot.

What to do instead

The corrective is not to over-engineer the first version. Over-engineering kills products. The corrective is to be honest with yourself about which decisions are firmware-flexible and which are not, and to invest accordingly in the ones that aren't.

A working heuristic: if a decision touches the radio, the SIM, the eUICC, or the certification regime, treat it as architectural. If it touches anything above that line, treat it as firmware-flexible and let it move quickly. Most teams I work with have this calibration backwards. They labour over application logic that will be replaced three times before the device retires, and they rush through SIM and connectivity decisions that will outlast the entire leadership team.
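The heuristic is simple enough to express as a triage rule. The sketch below is illustrative only: the layer names and the two-way split are assumptions drawn from the paragraph above, not a formal taxonomy.

```python
# Toy triage sketch of the "above or below the line" heuristic.
# Layer names here are illustrative, not a standard taxonomy.

# Layers below the firmware-reachable line: decisions touching these
# are effectively permanent once the device ships.
ARCHITECTURAL_LAYERS = {"radio", "sim", "euicc", "certification"}


def triage(layers_touched: set[str]) -> str:
    """Classify a design decision by the lowest layer it touches."""
    if layers_touched & ARCHITECTURAL_LAYERS:
        return "architectural: decide carefully before the first device ships"
    return "firmware-flexible: decide quickly, revise over the air"


# An edge ML model choice touches only the application layer...
print(triage({"application", "ml-model"}))
# ...while a SIM form-factor choice is locked in at manufacture.
print(triage({"sim"}))
```

The point of writing it this way is the asymmetry: one test gates the slow, careful path; everything else defaults to the fast path.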

The right time to think structurally about the connectivity layer is before the first device ships. The second-best time is now, before the next product cycle locks in another five years of debt. Neither of those times will feel convenient. They never do.


ENODA Ventures works with IoT companies on exactly these kinds of architectural decisions, both before launch and during in-flight programs. Read about Managed Connectivity Evolution →