Perspectives Development Update: Interrogation of Blind Proposals

I've rebuilt how debates work in Perspectives. The system now runs structured interrogations of blind proposals rather than open-ended discussion threads. The change addresses fundamental problems with how debates generate useful analysis.
The Problem with Threaded Debates
The previous system ran blind proposals followed by a threaded debate, where personas were encouraged to respond directly to the previous speakers. This created organic discussion but made analysis difficult. Personas would engage with topics unevenly. Some proposals would attract extensive criticism whilst others barely got challenged. The debate transcripts were interesting but the data was too messy to extract clear patterns.
More importantly, the system couldn't identify which arguments held up under scrutiny and which collapsed. When Pragmatist challenges Idealist's proposal, does the response actually address the concern or sidestep it? The old format gave no structured way to answer that question.
How Interrogation Works
The new system runs interrogations immediately after the blind proposals. For each proposal, the system selects three challengers based on framework opposition, and each challenger receives a specific analytical dimension to probe.

Each challenger submits targeted questions about their assigned dimension. The proposal author then responds to all three challenges, either conceding each point or defending the proposal. If the author defends, the challenger evaluates whether the defence succeeded or whether the challenge remains fundamentally disputed. This creates structured data about which aspects of each proposal are robust and which are vulnerable.
The system tracks verdicts across all interrogations. A proposal that successfully defends against most challenges demonstrates resilience. A proposal that attracts many disputed verdicts indicates fundamental disagreement that the discussion phase should address.
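The verdict flow above can be sketched with a few small data structures. This is a minimal illustration, not the actual implementation: the class and field names (`Verdict`, `Challenge`, `Interrogation`, `disputed_count`) are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    DEFENDED = "defended"   # the author's response addressed the challenge
    CONCEDED = "conceded"   # the author accepted the challenger's point
    DISPUTED = "disputed"   # the disagreement is fundamental and unresolved

@dataclass
class Challenge:
    challenger: str            # persona name, e.g. "Pragmatist"
    dimension: str             # analytical dimension assigned to this challenger
    question: str              # targeted question probing that dimension
    response: str = ""         # the proposal author's reply
    verdict: Optional[Verdict] = None  # issued by the challenger after the reply

@dataclass
class Interrogation:
    proposal_author: str
    challenges: list[Challenge]  # exactly three per proposal

def disputed_count(interrogation: Interrogation) -> int:
    """Count fundamentally disputed verdicts for one proposal."""
    return sum(1 for c in interrogation.challenges
               if c.verdict is Verdict.DISPUTED)
```

A proposal with a high `disputed_count` is exactly the kind that signals unresolved disagreement for the discussion phase.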
Why This Generates Better Analysis
The interrogation protocol creates quantifiable data about proposal strength. The analysis report can now identify which decision dimensions have the most unresolved conflict, which stakeholder perspectives align or diverge, and which information gaps prevent resolution.
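Once verdicts are structured data, ranking dimensions by unresolved conflict is a simple aggregation. A sketch of that idea, with hypothetical names (`unresolved_conflict` and the tuple shape are assumptions, not the real analysis code):

```python
from collections import Counter

def unresolved_conflict(verdicts: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Rank decision dimensions by how many disputed verdicts they attracted.

    `verdicts` is a list of (dimension, verdict) pairs collected across
    all interrogations in a debate.
    """
    disputed = Counter(dim for dim, v in verdicts if v == "disputed")
    return disputed.most_common()

ranking = unresolved_conflict([
    ("cost", "disputed"), ("risk", "defended"),
    ("cost", "disputed"), ("timeline", "conceded"),
    ("risk", "disputed"),
])
# "cost" tops the ranking with two disputed verdicts
```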
The protocol also forces more honest assessment. In open debate, personas naturally defend their frameworks. When a challenger issues a verdict after seeing the response, they're evaluating whether the author actually addressed their concern rather than whether they agree with the conclusion. This produces more accurate signals about argument quality.
The Discussion Phase
After interrogations complete, the system calculates tension levels based on disputed verdict counts. High tension (many fundamental disagreements) triggers longer discussion phases where personas address the most contentious dimensions directly. Low tension (most challenges defended or conceded) runs shorter discussions.
This adaptive approach allocates discussion time where genuine uncertainty exists rather than forcing extended debate on decisions with clear patterns.
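The adaptive allocation can be sketched as a simple thresholding function. The thresholds and round counts below are illustrative assumptions; the post doesn't specify the actual cutoffs:

```python
def tension_level(disputed: int, total_challenges: int) -> str:
    """Classify a debate's tension from its disputed-verdict ratio.

    The 0.4 cutoff is a hypothetical value for illustration only.
    """
    ratio = disputed / total_challenges if total_challenges else 0.0
    return "high" if ratio >= 0.4 else "low"

def discussion_rounds(level: str) -> int:
    # Longer discussion where genuine disagreement exists (illustrative values).
    return 4 if level == "high" else 2
```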
Because the discussion phase adds somewhat less value than the interrogation system, it can be skipped using the new “Fast Mode” (think of it as an inverse “Thinking Mode”: the system as a whole finishes much faster, at the cost of slightly less useful output).

Implementation Changes
The interrogation phase requires approximately 32 API calls per debate (8 proposals × 4 calls: challenger selection, challenge generation, author response, verdict evaluation). The calls run sequentially because verdicts depend on responses which depend on challenges. This adds processing time but creates substantially more structured data for analysis.
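The dependency chain behind the 8 × 4 = 32 figure can be sketched as below. `call_llm` is a stand-in for the real provider client; the prompts and function names are assumptions for illustration:

```python
import asyncio

async def call_llm(prompt: str) -> str:
    await asyncio.sleep(0)  # placeholder for a real provider call
    return f"result of: {prompt}"

async def interrogate_proposal(proposal: str) -> list[str]:
    """Four dependent calls per proposal: each step needs the previous output."""
    challenger = await call_llm(f"select challengers for {proposal}")
    challenge = await call_llm(f"generate challenge by {challenger}")
    response = await call_llm(f"author responds to {challenge}")
    verdict = await call_llm(f"evaluate verdict for {response}")
    return [challenger, challenge, response, verdict]

async def run_debate(proposals: list[str]) -> int:
    # Each proposal's chain runs sequentially: verdicts depend on responses,
    # which depend on challenges.
    results = [await interrogate_proposal(p) for p in proposals]
    return sum(len(r) for r in results)  # total API calls made

total = asyncio.run(run_debate([f"proposal {i}" for i in range(8)]))
# 8 proposals × 4 calls = 32
```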
The frontend displays interrogations through expandable persona sections. Each section shows three challenger rows with verdict icons (defended, conceded, disputed). Clicking a row reveals the full challenge text, response, and verdict reasoning in the main panel.

What This Enables
This protocol creates analysable data about argument resilience, stakeholder alignment, decision dimensions, and information requirements. This feeds directly into analysis reports that map these patterns to actionable decision support.
The system now tracks which frameworks challenge which proposals most effectively, which dimensions attract the most dispute, and which information would resolve remaining conflicts. None of this was extractable from free-flowing debate transcripts.
The protocol also establishes a foundation for future analysis improvements. When verdict patterns indicate specific information gaps, the system could potentially trigger targeted research or suggest follow-up interrogations on particular dimensions.
Additionally, the system can take advantage of higher-concurrency models. If a provider allows a model to be used five times simultaneously, for example, the interrogation phase completes much more quickly because all five slots are in use. And because the discussion phase is now much shorter on average, far less time is spent waiting for that sequential operation to complete (only one of the five slots can be used while the debate is in the discussion phase).
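Interrogations of different proposals are independent of each other, which is what lets them saturate the provider's slots. A minimal sketch of that pattern using a semaphore to model the slot limit (`MAX_SLOTS` and `call_model` are hypothetical names, not the actual client):

```python
import asyncio

MAX_SLOTS = 5  # hypothetical provider concurrency limit

async def call_model(prompt: str, sem: asyncio.Semaphore) -> str:
    async with sem:  # hold one of the provider's concurrent slots
        await asyncio.sleep(0.01)  # stand-in for a real model call
        return f"done: {prompt}"

async def interrogation_phase(proposals: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_SLOTS)
    # Different proposals' interrogations don't depend on each other,
    # so they can run concurrently and keep all five slots busy.
    return list(await asyncio.gather(
        *(call_model(p, sem) for p in proposals)
    ))

results = asyncio.run(interrogation_phase([f"proposal {i}" for i in range(8)]))
```

The sequential discussion phase, by contrast, behaves like a semaphore of one: only a single slot can ever be in use, which is why shortening that phase matters so much for wall-clock time.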
Trade-Offs
The structured approach sacrifices some organic discussion flow. Personas respond to direct challenges rather than building on each other's points naturally. The discussion phase attempts to recover this by debating disputed challenges, but it's less spontaneous than a continuous threaded conversation.
Looking Forward
The interrogation protocol establishes groundwork for analysis improvements that depend on structured verdict data. I'm exploring how to better visualise the patterns that emerge from interrogations (which dimensions create most conflict, which proposals survive scrutiny, which frameworks align or oppose).
The system is live at getperspectives.app. Escape the echo chamber.