The so-called halving initiative demands a fifty percent reduction in SRG's budget. Politically, this demand is highly controversial. From a technical perspective, however, it acts like a radical load test on a media system that structurally belongs to a different era.
The central question is not whether fifty percent is realistic, but what happens when you put the existing system under maximum efficiency pressure.
Broadcast Architecture versus Platform Architecture
The classic broadcast model is hierarchically structured. Content is planned, produced, approved, broadcast, and archived. Each step is clearly defined, often secured multiple times, and organizationally separated. This model is stable, but slow.
Digital platforms work differently. They are modular, API-driven, and highly automated. Content is no longer produced for a fixed broadcast slot, but as assets that can be played out depending on context. Metadata is almost more important than the content itself.
SRG today moves between these two worlds. Technically, platforms, media libraries, and digital offerings exist. Organizationally, however, the logic of linear programming still dominates in many places. This leads to duplicate workflows, redundant systems, and unnecessary complexity.
Linear Thinking Using the Example of a Classic TV Show
A good example of this is the Tagesschau. Editorially, it clearly fulfills the public service mandate: information, context, and reliability. Technically, however, the classic broadcast logic is very evident here.
Production is strongly oriented toward fixed broadcast times, defined segment lengths, and predetermined dramaturgy. Segments are primarily produced for linear broadcast. Only afterward does secondary use for media libraries, websites, or social media occur.
From a technical perspective, the reverse approach would also be conceivable: topics produced as modular building blocks with clear metadata, which can be played out digitally as well as assembled into a linear broadcast. The fact that both worlds are often served in parallel today significantly increases production effort.
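To make that concrete, here is a minimal sketch of what such modular building blocks could look like – the field names, durations, and platform labels are purely illustrative assumptions, not a description of SRG's actual systems:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One self-contained topic block, produced once and reused everywhere."""
    slug: str
    title: str
    duration_s: int
    topics: list[str] = field(default_factory=list)
    platforms: list[str] = field(default_factory=list)  # where this block may appear

def build_linear_rundown(segments: list[Segment], slot_s: int = 1800) -> list[Segment]:
    """Assemble a linear broadcast (here: a 30-minute slot) from existing modules."""
    rundown, used = [], 0
    for seg in segments:
        if "linear" in seg.platforms and used + seg.duration_s <= slot_s:
            rundown.append(seg)
            used += seg.duration_s
    return rundown

def digital_playout(segments: list[Segment], platform: str) -> list[Segment]:
    """The same modules, filtered for a digital platform instead of a broadcast slot."""
    return [s for s in segments if platform in s.platforms]

pool = [
    Segment("energy-interview", "Interview on the energy transition", 240,
            ["energy", "politics"], ["linear", "app", "web"]),
    Segment("vote-explainer", "Explainer on the upcoming vote", 180,
            ["politics"], ["linear", "web", "social"]),
]

print([s.slug for s in build_linear_rundown(pool)])       # linear broadcast as one output
print([s.slug for s in digital_playout(pool, "social")])  # the same pool, different channel
```

The point is not the code itself but the direction of dependency: the linear broadcast becomes one possible output of a modular pool instead of the starting point of everything else.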
Content Production Without Consistent Modularization
This pattern runs through many areas. Content is often conceived as self-contained units rather than as reusable modules. Short clips, long versions, original audio, transcripts, and visualizations don't emerge from a common toolkit but from separate processes.
Digital players massively reduce their effort this way. They produce once and play out multiple times. Without this mindset, resource requirements increase disproportionately.
Podcasts as Digital-First, But Not Always Process-First
Podcasts are often cited as evidence that SRG has arrived in the digital world. An example is Input. The format is thematically focused, clearly digital-first, and aimed specifically at a younger audience.
Technically, podcasts are comparatively efficient. They require little infrastructure, no broadcast slots, and are flexible in timing. At the same time, many of these productions are still organizationally treated like classic broadcasts, with similar approval processes, committees, and decision-making paths.
This shows a central tension: digital formats exist, but they often still run on analog processes. The technology would be lightweight; the organization isn't always.
Metadata as an Underestimated Efficiency Factor
Especially with podcasts and digital content, metadata is crucial. Titles, descriptions, topic assignments, timestamps, and keywords determine discoverability, archive value, and reusability.
A concrete example: An interview with an expert on the energy transition could appear as a standalone piece, as a clip in a news broadcast, as an audio segment in a podcast, as a quote in an explainer video, and as a transcript in an online dossier. The prerequisite is that the raw material is tagged with clean metadata from the start – who is speaking, about what, in which context, with which rights.
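Sketched below is what such a record could look like in practice – the field names, values, and storage path are hypothetical, not an actual SRG schema:

```python
# Illustrative metadata record for the interview described above.
# All field names, values, and the storage path are hypothetical.
interview_metadata = {
    "asset_id": "2025-energy-expert-interview",
    "speakers": [{"name": "N.N.", "role": "energy researcher"}],
    "topics": ["energy transition", "electricity supply"],
    "context": "recorded for the evening news, usable as a standalone item",
    "language": "de-CH",
    "timestamps": [
        {"start": "00:02:10", "end": "00:03:05", "label": "key quote on grid expansion"},
    ],
    "rights": {
        "territories": ["CH"],
        "platforms": ["linear", "app", "web", "podcast"],
        "expires": "2027-12-31",
    },
    "transcript_uri": "s3://archive/2025/energy-expert-interview.txt",  # hypothetical path
}
```

With a record like this, every later use – the clip, the podcast segment, the dossier quote – can be found, cleared, and assembled largely automatically.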
In many historically grown systems, however, metadata is maintained more as an afterthought, often manually and incompletely. This hinders not only automation and personalization, but also data-based playout across different platforms. Content that has been produced once gets stuck in silos because it simply isn't discoverable or correctly classified.
Technically, a clean metadata strategy would be relatively easy to implement – modern content management systems and AI-powered tagging tools make this possible today. Organizationally, however, it's demanding because it means thinking about workflows differently from the start: metadata not as a tedious obligation at the end, but as an integral part of the production process.
Social Media and Platform Logic
The break between technology and organization becomes particularly visible with social media channels. An example is SRF Impact on TikTok. The content is short, quickly consumable, and optimized for algorithmic visibility. Some of it achieves very high reach.
Technically, such channels work fundamentally differently from classic television. A TikTok video has to convince in the first two seconds, or users swipe away. The algorithm rewards reactions, not broadcast times. What works shows up in real time; what doesn't disappears into the noise.
For traditional editorial teams, this is an adjustment. On television, a segment is broadcast once, then it's done. On social media, a topic lives on: it gets commented on, shared, criticized, sometimes for days. This requires not only different content but also different response times. If you react to a viral moment the next day, you've missed it.
At the same time, these channels are often connected to classic editorial teams. This creates tensions. Approval processes, responsibilities, and quality controls come from a world that wasn't built for seconds-long content. If a TikTok video has to go through three hierarchy levels before publication, the moment is over.
The technology is modern; the structure often isn't. This doesn't mean quality control is unnecessary, but it has to work differently. Less pre-approval, more trust in teams, clearer guardrails instead of case-by-case decisions.
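What guardrails could mean in practice: automated checks that run before publication and only escalate actual violations to a human. A minimal sketch, with rules invented purely for illustration:

```python
# Hypothetical guardrail check that runs automatically before a social post goes out.
# Only posts that violate a rule are escalated to a human; everything else ships.

RULES = {
    "max_duration_s": 90,             # platform-appropriate length
    "rights_cleared_required": True,  # music / footage rights must be documented
    "embargo_respected": True,        # no publication before an agreed embargo
}

def guardrail_check(post: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the post can go out."""
    violations = []
    if post["duration_s"] > RULES["max_duration_s"]:
        violations.append("too long for the platform")
    if RULES["rights_cleared_required"] and not post.get("rights_cleared"):
        violations.append("rights not documented")
    if RULES["embargo_respected"] and post.get("under_embargo"):
        violations.append("embargo still active")
    return violations

post = {"duration_s": 45, "rights_cleared": True, "under_embargo": False}
issues = guardrail_check(post)
print("publish" if not issues else f"escalate: {issues}")
```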
Legacy Systems and Technical Debt
Like many large organizations, SRG also carries significant technical debt: proprietary systems, historically grown broadcast infrastructure, and specialized solutions for individual language regions or formats.
These systems are often stable but expensive to operate and difficult to develop further. Cloud-native architectures, scalable platforms, or automated workflows can only be integrated to a limited extent as long as old systems must be operated in parallel.
Massive budget pressure would make these weaknesses visible – not because technology is lacking, but because too much old technology is being kept alive simultaneously.
Automation as a Cultural Hurdle
Automated transcription, subtitling, translation, content classification, and archiving have long been technically possible. The biggest hurdle lies not in the technology but in acceptance.
An example: Automatic subtitling is now so mature that it works in real time. For a 30-minute program, this saves several hours of manual work. The same applies to transcripts, which used to be laboriously created and are now available via speech recognition within minutes. Yet many of these tasks are still done by hand, not because the technology fails, but because processes, responsibilities, and quality expectations are designed for manual workflows.
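As one illustration of how mature this tooling has become, here is a minimal transcription-to-subtitles sketch using the open-source Whisper model – chosen only for the example, not a statement about the tools actually in use at SRG, and broadcast subtitles would of course still need editorial review:

```python
# Minimal sketch: automatic transcription plus SRT subtitles with the open-source
# Whisper model (pip install openai-whisper). Illustrative only; file names are invented.
import whisper

def to_srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = int((seconds - int(seconds)) * 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

model = whisper.load_model("small")
result = model.transcribe("evening_news.mp3", language="de")  # hypothetical input file

with open("evening_news.srt", "w", encoding="utf-8") as srt:
    for i, seg in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n")
        srt.write(seg["text"].strip() + "\n\n")

print(result["text"][:200])  # the full transcript is produced in the same pass
```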
Automation changes role models and decision-making processes. Those who were previously responsible for subtitles wonder what will happen to their position. Those who previously defined quality through manual control must learn to manage spot checks and exceptions instead of reviewing every single case. In publicly funded organizations, this change is particularly sensitive because it is directly linked to jobs and political perception.
At the same time, a reduced budget would effectively make automation a prerequisite. The question is not whether, but how: as a forced clear-cut or as a managed transition.
Multilingualism Between Technology and Organization
Switzerland's multilingualism is a special challenge. Technically, content can now be efficiently translated and adapted. Organizationally, multilingualism is often still understood as parallel production.
An example: The same news story is often prepared separately in each language region today – separate translation, separate graphics, separate presentation. Technically, it would be possible to create shared elements centrally and only adapt the language-specific layer locally. Where exactly the balance lies between efficiency and regional autonomy is a political question – but the technical possibilities have long existed.
This leads to redundant structures. A more platform-oriented approach could enable shared content that is adapted for each language region. Technically feasible, politically sensitive.
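A minimal sketch of that idea – one shared core, a thin language-specific layer on top; the structure and values are invented for illustration:

```python
# Hypothetical sketch: one shared story core, thin language-specific overlays.
shared_core = {
    "story_id": "vote-2026-energy-law",
    "facts": ["Parliament adopted the revised law", "Referendum scheduled for March"],
    "chart_data": {"yes": 54, "no": 46},           # produced once, rendered per language
    "source_footage": "mam://footage/vote-2026",   # hypothetical archive reference
}

language_layers = {
    "de": {"headline": "Energiegesetz kommt vors Volk", "voiceover": "de-CH"},
    "fr": {"headline": "La loi sur l'énergie devant le peuple", "voiceover": "fr-CH"},
    "it": {"headline": "La legge sull'energia al voto popolare", "voiceover": "it-CH"},
}

def localized_story(lang: str) -> dict:
    """Merge the shared core with the language-specific layer."""
    return {**shared_core, **language_layers[lang]}

for lang in language_layers:
    print(localized_story(lang)["headline"])
```

How much ends up in the shared core and how much stays with the regions is exactly the political question mentioned above; the pattern itself is trivial.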
Using Data Without Losing Trust
Digital media operate in a data-driven way. Usage, dwell time, and drop-off rates flow directly into decisions. In public service, however, handling data is a delicate matter.
The challenge is to use data without jeopardizing editorial independence or data protection. Technically, this is solvable, but requires clear rules and transparency.
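One technically simple pattern for this is to aggregate usage data before anyone looks at it and to suppress figures for small groups entirely. A minimal sketch, with invented numbers:

```python
# Sketch: aggregate anonymized playback events before reporting, and suppress
# figures for small groups so individuals can never be singled out. Numbers invented.
from collections import defaultdict

MIN_GROUP_SIZE = 50  # below this, the figure is suppressed rather than reported

events = [  # anonymized playback events: (content_id, seconds_watched)
    ("vote-explainer", 110), ("vote-explainer", 95), ("energy-interview", 230),
    # ... in reality, millions of events without user identifiers
]

def aggregated_watch_time(events):
    totals, counts = defaultdict(int), defaultdict(int)
    for content_id, seconds in events:
        totals[content_id] += seconds
        counts[content_id] += 1
    report = {}
    for content_id, n in counts.items():
        if n >= MIN_GROUP_SIZE:
            report[content_id] = round(totals[content_id] / n)
        else:
            report[content_id] = None  # suppressed: group too small
    return report

print(aggregated_watch_time(events))
```

Editorial teams see which topics hold attention; nobody sees what an individual user did. The rules and thresholds are the part that needs transparency.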
The Halving Initiative as an Architectural Question
In the end, the halving initiative is less a financial than an architectural question. It forces a reassessment of systems, processes, and assumptions, not abstractly, but concretely.
A fifty percent budget reduction cannot be achieved through savings at the margins. It requires fundamental decisions about what constitutes the core of the mandate and what represents historically grown extensions. This is uncomfortable because it means questioning cherished formats, structures, and habits.
Which structures are actually core infrastructure? Information, context, a democratic public sphere – these are functions that hardly anyone seriously disputes. But does this include a linear full-service program with entertainment, sports, and fiction? Or would a focused digital basic provision be more contemporary?
Which structures have simply grown historically? Much of what seems self-evident today originated in an era when broadcasting was the only electronic mass medium. Dedicated orchestras, dedicated archives, dedicated production studios for each language region – all of this once had good reasons. Whether these reasons still apply today is rarely asked.
Where can technology really bring efficiency without losing quality? This is the crucial question. Automation, modularization, and platform thinking can free up resources, but only when implemented consistently. Half-hearted digitalization, where new systems are operated parallel to old ones, costs more rather than less.
In this sense, the halving initiative is less a concrete savings proposal than a thought experiment: What would happen if the system were put under maximum pressure? Which structures would survive, which would collapse? And what does that say about their actual necessity?
Conclusion – Why This Debate Is Necessary, Even Without Easy Answers
The longer I engage with SRG technically and structurally, the clearer it becomes to me: the actual question of this vote is not halving yes or no. It's whether we're ready to consistently detach public service from the logic of classic broadcasting and truly think of it as digital infrastructure.
I fundamentally consider publicly funded media in a direct democracy to be meaningful and necessary. Not as a feel-good offering, not as an entertainment corporation, but as a reliable, independent basis for information, context, and social discourse. Especially in a time of disinformation, algorithmic polarization, and global platforms, we need media actors that aren't primarily dependent on clicks, reach, or advertising markets.
At the same time, I struggle with the idea that this goal inevitably justifies today's structures, processes, and scope of offerings as set in stone.
From a technical perspective, SRG often seems like a system that expends a lot of energy keeping itself stable. Parallel workflows, historically grown responsibilities, redundant productions, and a strong orientation toward linear formats create an inertia that's hardly compatible with the speed of digital media worlds. This isn't individual failure; it's systemic.
I don't believe that quality automatically depends on budget. Quality depends on clarity. Clarity about the mission, about priorities, and about where you consciously no longer want to be everything at once. Fewer formats, less parallelism, fewer structural redundancies could ultimately even lead to more depth of content.
What I often miss in the debate is the distinction between content and structure. Many defend the existing system because they appreciate individual shows, journalists, or formats. That's understandable. But this appreciation doesn't answer the question of whether the underlying technical and organizational model is still contemporary.
The halving initiative is radically formulated, and yes, it seems crude. But perhaps this very crudeness is necessary to trigger a discussion that would otherwise continue to be postponed. Without pressure, there's little incentive to honestly address technical debt, inefficient processes, and historically grown comfort zones.
Personally, I see the future of public service less in competing for reach or entertainment than in a clear role as basic digital provision. Content conceived modularly, technically cleanly prepared, multilingually scalable, transparently financed, and organizationally as lean as possible. Not everything for everyone, but the essentials for society.
Whether halving the budget is the right lever is debatable. That SRG must evolve technically, structurally, and conceptually, however, I consider undisputed. This vote is less an endpoint than a beginning.
Not as an attack on public service, but as an invitation to rethink it.
