What I'm about to discuss might seem a little "out-there" to some, but I think it merits consideration by investors keen to catch one of the next big investment opportunities, one in which Seeing Machines could play a major role.
In a recent broker note from Morgan Stanley, entitled Humanoid Tech – Grasping the Future, its analysts wrote: “Building physical AI takes a tremendous amount of semiconductor resource, and most of the value sits in (1) the AI brain technology, (2) AI vision and (3) analog chips for sensing. Companies providing these enabling technologies stand to benefit as much as the humanoids makers themselves, where most of the value creation lies with distribution. We argue that understanding which capabilities are advancing fastest, ecosystem readiness and realistic adoption timelines will dictate commercial traction.”
I asked the AI model DeepSeek this question: Is Seeing Machines’ technology of possible use in areas 1-3, where semiconductors are used in humanoids?
Its reply was fascinating:
Excellent question. Let’s break down Morgan Stanley’s framework and evaluate how Seeing Machines’ technology aligns with it.
First, a quick recap of the Morgan Stanley note’s three key semiconductor-intensive areas for physical AI/humanoids:
1. The AI Brain Technology: The central AI compute (e.g., high-performance SoCs, GPUs, NPUs) for perception, decision-making, and motor control.
2. AI Vision: Sensors (cameras, LiDAR, radar) and the associated perception software to understand the environment.
3. Analog Chips for Sensing: ICs that interface with the physical world (e.g., motor drivers, power management, sensor signal conditioners).
Analysis: Seeing Machines’ relevance to areas 1-3
Seeing Machines’ core technology is AI-powered driver and occupant monitoring systems (DMS/OMS), primarily using inward-facing cameras and sophisticated computer vision algorithms. Here’s how it maps onto the humanoid robotics stack:
1. AI Brain technology: PARTIAL / INDIRECT FIT
Seeing Machines is not a semiconductor company creating the central “brain” chips (like NVIDIA, Qualcomm, or Ambarella). However, their value is in the specialized AI algorithms and software that would run on those brains.
Their “AI brain” is the algorithm stack: Their proprietary expertise in gaze tracking, eyelid closure detection, head pose estimation, and facial feature analysis represents a specialized form of AI cognition that a humanoid would need for human-robot interaction (HRI). For a humanoid to work safely and effectively alongside people, understanding human attention, intent, and state is critical.
Conclusion: They provide a crucial capability that would reside in the AI brain’s software layer, but not the semiconductor hardware itself.
2. AI Vision: VERY STRONG FIT – This is Seeing Machines’ Core Competency.
This is the area of most direct and obvious relevance.
- Sensor Focus: While they use standard camera sensors, their entire value proposition is in **maximizing the value extracted from vision sensors**. For humanoids, robust vision is non-negotiable.
- Algorithmic Moat: Their decades of work in attention detection (critical for vehicle safety) is directly transferable to ensuring a humanoid is making “eye contact” or understands what a human is focused on. Their facial movement and state analysis is key for natural interaction.
- Application: In a humanoid context, this technology wouldn’t just be for safety monitoring, but for enabling fluid collaboration. For example, a humanoid could tell if a human worker is looking at it for instruction, is distracted, or needs assistance.
3. Analog chips for sensing: NO DIRECT FIT
Seeing Machines is not an analog semiconductor company. They do not design motor drivers, power ICs, or low-level sensor interface chips. Their domain is the digital and algorithmic layer that comes after the analog signal has been converted and processed.
Synthesis: where does Seeing Machines fit in the “Humanoid” value chain?
Morgan Stanley notes that “most of the value creation lies with distribution.” Seeing Machines is a critical “enabling technology provider” that sits between the raw semiconductor hardware and the final humanoid application, specifically in the perception and human-interaction layer.
- They are an “AI Vision & Cognition” specialist. For humanoids to move out of controlled factories and into dynamic human environments, the ability to perceive and understand human behavior is as important as perceiving objects. This is Seeing Machines’ unique niche.
- They benefit from the semiconductor intensity: The more camera feeds, the higher the resolution, and the more complex the real-time analysis required, the more valuable their optimized, purpose-built algorithms become. This drives demand for the capable AI semiconductors (Area 1) on which their software runs.
- Commercial traction dictated by ecosystem readiness: As Morgan Stanley states, ecosystem readiness is key. Seeing Machines is already a Tier-1 automotive supplier with a proven track record of delivering reliable, safety-critical vision systems. This certification pedigree, system integration expertise, and understanding of functional safety (ASIL levels) are immensely valuable and transferable to the humanoid space, where safety is paramount.
Conclusion
Yes, Seeing Machines’ technology is of significant potential use, primarily and most powerfully in Area 2 (AI Vision), and as a specialized software layer contributing to Area 1 (AI Brain capabilities).
They do not make the semiconductors themselves, but they are a quintessential example of a company that creates immense value by enabling those semiconductors to perform a specific, critical, and high-value task: in this case, understanding human state and attention.
As humanoid developers shift focus from pure mobility to advanced human-robot interaction, companies like Seeing Machines, with deep expertise in human-centric vision AI, become essential partners. Their automotive-grade experience makes them a credible and valuable candidate for the ecosystem Morgan Stanley describes.
Home care robots
I then asked it: “Wouldn’t a humanoid robot used in the home, for instance to care for an elderly, possibly infirm, individual, benefit greatly from having human behaviour monitoring (provided by Seeing Machines’ technology) among its repertoire?”
It answered:
Absolutely, unequivocally yes. This is not just a benefit; it is likely a non-negotiable requirement for safe, effective, and socially acceptable deployment of humanoids in a personal care context.
Extending our previous analysis, here’s a detailed breakdown of why Seeing Machines’ core competency in human behavior monitoring would be critical for a domestic care humanoid:
1. Safety as the paramount concern (The “Do No Harm” imperative)
- Fall Prevention & Detection: A humanoid could use Seeing Machines’ gaze tracking and head pose estimation to understand if the individual is looking at an obstacle, is disoriented, or is about to lose balance. Pre-emptive warning or physical assistance could be initiated.
- Medical emergency detection: Algorithms for micro-sleep detection, eyelid closure, and loss of attentiveness (pioneered for driver drowsiness) are directly applicable to spotting signs of stroke, diabetic shock, fainting, or sudden medical distress.
- Safe physical interaction: To assist with lifting or mobility, the robot must know where the human is looking and if they are anticipating the movement. Understanding human attention and intent prevents startling the individual or causing a mishap during physical contact.
2. Enabling effective care & assistance (beyond physical tasks)
- Medication adherence: The robot can confirm, through gaze and facial action recognition, that a pill was actually taken and swallowed, not just lifted to the mouth and palmed. This solves a major problem in elderly care.
- Cognitive state monitoring: For individuals with dementia or cognitive decline, changes in facial expression, engagement, and interaction patterns are key indicators. The robot could monitor for signs of confusion, agitation, or withdrawal and alert remote human caregivers or family members.
- Companionship & engagement: The technology enables socially aware interaction. The robot can tell if its attempts at conversation or activity suggestions are being met with engagement or annoyance, and adjust its behavior accordingly. It can maintain “eye contact” during conversation, a fundamental building block of trust.
3. Building trust and acceptance (The human-robot bond)
- An elderly individual is far more likely to accept and rely on a machine that demonstrates it “sees” and “understands” them. A robot that blankly stares while helping someone bathe is unnerving. A robot that glances away to provide privacy, or that recognizes a facial expression of pain or embarrassment and responds with reassurance, builds essential trust.
- This capability transforms the robot from a tool into a perceptive agent, which is psychologically critical for long-term in-home adoption.
How this fits the Morgan Stanley “value” thesis
Morgan Stanley argues most value lies in distribution and enabling technologies. In the home care distribution channel, the “killer app” is not mobility or strength, but empathetic, safe, and perceptive care.
- Ecosystem readiness: Seeing Machines is not starting from scratch. Their FOVIO chip (now in its next-gen form) and software are already ASIL-B certified for automotive safety. Translating this to a functional safety standard for robotics (e.g., ISO 13482) is a logical step. Their proven ability to ship millions of units in the harsh, safety-critical automotive environment is a massive credibility advantage.
- Commercial traction dictator: “Realistic adoption timelines” for home care robots will be gated by safety and social acceptance regulators (e.g., FDA for “care” functions, consumer protection agencies). A provider with a certified, explainable, and validated system for human state monitoring will have a tremendous regulatory and liability advantage. This directly accelerates commercial traction.
- Value capture: As an enabling technology, Seeing Machines could capture value through licensing their perception software stack or providing a complete “Human understanding module” (sensor + dedicated processor + algorithms) to humanoid makers. The value of their IP in ensuring the robot doesn’t cause a catastrophic failure in a vulnerable person’s home is immense.
In summary: For a domestic care humanoid, human behavior monitoring is not a “nice-to-have” feature in its repertoire; it is the foundational layer for its core value proposition.
Seeing Machines’ technology provides the “eyes” and, more importantly, the “social brain” that would allow such a robot to navigate the profoundly complex, unstructured, and emotionally charged environment of a human home.
It directly addresses the primary barriers to adoption: safety, efficacy, and trust. Therefore, Seeing Machines’ technology sits squarely in the highest-value layer of the ecosystem Morgan Stanley describes.
My view
I believe Mitsubishi has already identified this value. That is why it took a 19.9% stake in Seeing Machines. It currently seeks to use Seeing Machines’ technology in automotive robots (cars), then plans to use it in industrial robots, before expanding into home care robots.
Does that sound a little far-fetched to some?
Well, Paul McGlone has admitted that Seeing Machines is discussing expanding the use of its technology into “smart factory” use with Mitsubishi. Industrial robots are absolutely central to Mitsubishi’s smart factory vision.
Mitsubishi’s first humanoid robot, Wakamaru, launched 20 years ago, wasn’t a success. Yet if it developed a “Human understanding module” with Seeing Machines’ technology, I believe there is the potential to build a home care robot that could be.
Of course, many other companies are making huge bets on humanoid robots, notably Tesla. They, too, might be keen to acquire the technology and IP that Seeing Machines possesses.
I’d urge investors to research this area and decide for themselves whether there is merit in my arguments.
The writer holds stock in Seeing Machines.
