@trupples — Thank you for taking the time to write this incredibly thorough analysis. This is exactly the kind of detailed, hard-hitting technical feedback that makes open standards actually usable in the real world.
I’ll address your points directly against our current v2.0 spec:
1. Physical Robot “Might” (kinematics & size)
You nailed it. The current spec uses TTC (Time-to-Collision = distance / closing_velocity) as the primary determinant. While TTC implicitly accounts for velocity, you’re totally right that mass and kinematic capability are currently ignored. A 500 kg industrial arm at 0.1 m/s still carries more kinetic energy (E_k = ½mv²: 2.5 J vs 1.5 J) and roughly 17× the momentum of a 3 kg Roomba at 1 m/s, even though the Roomba might have a shorter TTC.
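To make that gap concrete, here’s a quick back-of-the-envelope comparison (the 2 m separation is an arbitrary illustration, not a spec value):

```python
# Illustrative numbers only, not spec values: why TTC alone misses "might".
def ttc(distance_m, closing_velocity_ms):
    """Time-to-Collision (v2.0): distance / closing velocity."""
    return distance_m / closing_velocity_ms

def kinetic_energy(mass_kg, velocity_ms):
    """E_k = 1/2 * m * v^2"""
    return 0.5 * mass_kg * velocity_ms ** 2

# Both platforms at the same 2 m separation:
print(ttc(2.0, 1.0), ttc(2.0, 0.1))                      # Roomba 2.0 s, arm 20.0 s
print(kinetic_energy(3, 1.0), kinetic_energy(500, 0.1))  # 1.5 J vs 2.5 J
print(3 * 1.0, 500 * 0.1)                                # momentum: 3 vs 50 kg*m/s
# TTC rates the Roomba 10x more urgent; momentum says the arm hits ~17x harder.
```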
This is a known blind spot we are tracking for v2.1. Our current thinking is to introduce an optional platform_risk_class (essentially a Kinetic Energy Multiplier based on ISO 13482 categories) that offsets the TTC thresholds. A heavyweight arm would simply trigger CARE at a much longer TTC than a lightweight mobile platform. The golden rule here: any modifier must remain strictly physics-based, never demographic-based.
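As a sketch of how that offset could work, assuming placeholder class names and multiplier values (none of these numbers are v2.1 decisions):

```python
# Hypothetical platform_risk_class -> TTC threshold multiplier.
# Values are illustrative placeholders, pending ISO 13482-derived categories.
RISK_CLASS_MULTIPLIER = {
    "P1_LIGHTWEIGHT": 1.0,   # e.g., small mobile platforms
    "P2_MEDIUM":      1.5,
    "P3_HEAVYWEIGHT": 2.5,   # e.g., industrial arms
}

BASE_CARE_TTC_S = 4.0  # placeholder base threshold, not a spec value

def care_ttc_threshold(platform_risk_class):
    """A heavier class triggers CARE at a proportionally longer TTC."""
    return BASE_CARE_TTC_S * RISK_CLASS_MULTIPLIER[platform_risk_class]

# A P3 arm would enter CARE at TTC <= 10.0 s, a P1 platform at TTC <= 4.0 s.
```

Note that the modifier stays purely physical: nothing about the detected human feeds into the multiplier.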
2. State Orthogonality (Confidence × Operating States)
This is probably the most critical implementation gap you’ve pointed out. Extended States (MED_CONF, LOW_CONF, INTEGRITY) are designed as overlays, not replacements. But what does that actually look like?
If we just try to blend colors (e.g., mixing an Amber Core State with a Cyan Extended State on a physical LED ring), we get washed-out “visual mud” on standard hardware. That helps no one. For v2.1, we will explicitly define Spatial Multiplexing rules. For example: the front 80% of a light array displays the Core State (e.g., INTENT amber flow), while the rear 20% pulses the Extended State (e.g., MED_CONF cyan shimmer). For the audio channel, the MED_CONF query tone would play in the gaps between the INTENT hum cycles. I’m opening a GitHub issue for a dedicated “State Composition” section to nail this down.
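Here’s a minimal sketch of the spatial-multiplexing rule for an LED ring, using the 80/20 split from the example above (the RGB values and ring size are illustrative assumptions, not spec constants):

```python
# Composite a Core State and an Extended State onto one LED ring
# via spatial multiplexing instead of color blending.
AMBER = (255, 140, 0)   # INTENT (illustrative RGB)
CYAN  = (0, 200, 200)   # MED_CONF (illustrative RGB)

def composite_ring(n_leds, core_rgb, extended_rgb, core_fraction=0.8):
    """Front `core_fraction` of the ring shows the Core State;
    the rear remainder shows the Extended State overlay."""
    split = int(n_leds * core_fraction)
    return [core_rgb] * split + [extended_rgb] * (n_leds - split)

frame = composite_ring(24, AMBER, CYAN)  # 19 amber LEDs, 5 cyan LEDs
```

The audio channel would follow the same principle, multiplexed in time rather than space.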
3. IDLE State Semantics (“doing nothing” vs. “no humans around”)
Good observation. In v2.0, IDLE triggers when no human is within 50 meters. From the human’s perspective, there’s no practical difference — if you’re not there, you don’t see the signal. The breathing pulse exists primarily as a system health heartbeat for operators and to provide a smooth animation anchor when a human does enter the zone (IDLE → AWARENESS).
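For reference, the v2.0 transition reduces to a simple distance gate (state names per the spec; everything past AWARENESS is omitted in this sketch):

```python
IDLE_ZONE_M = 50.0  # v2.0: IDLE when no human is within 50 m

def core_state(nearest_human_m):
    """Minimal IDLE <-> AWARENESS gate; further escalation omitted."""
    if nearest_human_m is None or nearest_human_m > IDLE_ZONE_M:
        return "IDLE"  # breathing pulse: health heartbeat + animation anchor
    return "AWARENESS"
```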
For AMR fleets in warehouses, I can see the argument for splitting this into STANDBY and UNOCCUPIED. We initially decided against it to avoid state-bloat without a direct safety benefit, but I’m definitely open to revisiting this if the AMR community needs it.
4. Photosensitive Epilepsy & Accessibility
Incredibly important catch. Just to quickly clarify the v2.0 spec: THREAT is designed as a de-escalating “White Breathing” glow (the design intent is to be non-aggressive when the robot is physically threatened). However, you are absolutely right about the CRITICAL state. It currently specifies a 3.0 Hz red strobe, which sits right at the danger threshold for photosensitive epilepsy according to the Harding test.
This is a genuine conflict between safety urgency and accessibility. For v2.1, we will cap the CRITICAL strobe at a strict 2.0 Hz maximum. We’re also exploring asymmetric pulse patterns (like short-short-pause) that convey high urgency without triggering standard epilepsy thresholds. We will add a dedicated accessibility compliance section referencing WCAG 2.3.1 (Three Flashes or Below Threshold). Thank you for flagging this!
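A sketch of what a capped short-short-pause pattern could look like (the flash widths below are assumptions; only the 2.0 Hz cap is the proposed limit):

```python
# One-second "short-short-pause" CRITICAL cycle:
# two brief flashes per second stays at the 2.0 Hz flash cap.
FLASH_ON_MS  = 80    # illustrative flash width
FLASH_GAP_MS = 120   # illustrative gap between the two short flashes

def critical_pattern_1s():
    """Return (state, duration_ms) segments for one repeating 1 s cycle."""
    on, gap = FLASH_ON_MS, FLASH_GAP_MS
    pause = 1000 - (2 * on + gap)  # remainder of the second stays dark
    return [("ON", on), ("OFF", gap), ("ON", on), ("OFF", pause)]

assert sum(d for _, d in critical_pattern_1s()) == 1000
# 2 flashes per second, under the WCAG three-flashes-per-second limit.
```

The asymmetry (tight double flash, long dark tail) is what carries the urgency, not raw frequency.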
5. Heterogeneous Sensor Fusion
The spec handles this partially with general confidence thresholds, but your G1 example (high-res front, low-res rear) highlights a missing piece: directional confidence.
Currently, the conservative default would be to report the lowest overall confidence across all sectors (triggering MED_CONF globally). But you’re right, that’s inefficient. This ties perfectly into the Spatial Multiplexing concept from Point 2: v2.1 needs to support per-sector confidence reporting, so a robot can show normal Core States on its front LEDs while the rear LEDs indicate degraded sensing.
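A sketch of per-sector reporting (the field names and confidence cutoffs below are placeholders for discussion, not spec values):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-sector confidence report; thresholds are placeholders.
MED_CONF_BELOW = 0.8
LOW_CONF_BELOW = 0.5

@dataclass
class SectorConfidence:
    heading_deg: float   # sector center, robot frame
    width_deg: float
    confidence: float    # fused 0.0-1.0 confidence for this sector

def extended_state(sector: SectorConfidence) -> Optional[str]:
    """Overlay state for one sector; None means no overlay needed."""
    if sector.confidence < LOW_CONF_BELOW:
        return "LOW_CONF"
    if sector.confidence < MED_CONF_BELOW:
        return "MED_CONF"
    return None

# G1-style asymmetry: front sector clean, rear sector degraded.
front = SectorConfidence(0.0, 180.0, 0.95)    # -> None (normal Core State)
rear  = SectorConfidence(180.0, 180.0, 0.62)  # -> "MED_CONF" on rear LEDs
```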
6. Expressing Complex Intentions (AMR-specific)
Great reference to the OTTO AMR light patterns. This brings up an important architectural boundary for the protocol: LSEP is strictly designed as a safety- and state-awareness layer, not a navigation telemetry substitute.
Turning, docking sequences, and path-following are operational telemetry, which are functionally orthogonal to a robot’s safety state. A robot can absolutely display an LSEP AWARENESS state on its main torso while simultaneously using standard directional indicators on its chassis to signal a turn. We want to keep LSEP lean, modular, and focused purely on human-robot safety transparency, rather than trying to swallow every possible operational behavior into a single standard.
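In code terms, the two channels never merge; a minimal sketch of that layering (function and signal names here are purely illustrative):

```python
# LSEP safety channel and navigation telemetry channel stay independent:
# they drive different hardware and never modify each other's state.
def render_frame(lsep_state, nav_signal=None):
    """Compose one output frame from two orthogonal channels."""
    return {
        "torso_array": lsep_state,        # e.g., "AWARENESS" per the LSEP spec
        "chassis_indicators": nav_signal,  # e.g., "TURN_LEFT", outside LSEP's scope
    }

frame = render_frame("AWARENESS", "TURN_LEFT")
```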
Regarding the lsep.org Firefox bug — confirmed, the Framer JS is acting up. GitHub remains our canonical source of truth anyway. And noted regarding the analogy in the previous comment — sticking strictly to the spec moving forward!
Next steps: I’m creating GitHub issues for Points 1, 2, 4, and 5 right now to track them for the v2.1 release. Your feedback directly shapes this standard. We are currently forming the LSEP Alliance (a consortium of engineers working on HRI standardization); if you’re interested, we would love to have your voice in the mix.
Full spec is here: github.com/NemanjaGalic/LSEP