Will Seeing Machines licence its software stack for humanoid development?

What I’m about to discuss might seem a little ‘out-there’ to some, but I think it merits consideration by investors keen to catch one of the next big investment opportunities, one in which Seeing Machines could play a major role.

In a recent broker note from Morgan Stanley, entitled Humanoid Tech – Grasping the Future, its analysts wrote: “Building physical AI takes a tremendous amount of semiconductor resource, and most of the value sits in (1) the AI brain technology, (2) AI vision and (3) analog chips for sensing. Companies providing these enabling technologies stand to benefit as much as the humanoid makers themselves, where most of the value creation lies with distribution. We argue that understanding which capabilities are advancing fastest, ecosystem readiness and realistic adoption timelines will dictate commercial traction.”

I asked the AI DeepSeek this question: Is Seeing Machines’ technology of possible use in the areas 1-3 where semiconductors are used in humanoids?

Its reply was fascinating:

Excellent question. Let’s break down Morgan Stanley’s framework and evaluate how Seeing Machines’ technology aligns with it.

First, a quick recap of the Morgan Stanley note’s three key semiconductor-intensive areas for physical AI/humanoids:

1.  The AI Brain Technology: The central AI compute (e.g., high-performance SoCs, GPUs, NPUs) for perception, decision-making, and motor control.

2.  AI Vision: Sensors (cameras, LiDAR, radar) and the associated perception software to understand the environment.

3.  Analog Chips for Sensing: ICs that interface with the physical world (e.g., motor drivers, power management, sensor signal conditioners).

Analysis: Seeing Machines’ relevance to areas 1-3

Seeing Machines’ core technology is AI-powered occupant and driver monitoring systems (DMS/OMS), primarily using inward-facing cameras and sophisticated computer vision algorithms. Here’s how it maps onto the humanoid robotics stack:

1. AI Brain technology: PARTIAL / INDIRECT FIT

Seeing Machines is not a semiconductor company creating the central “brain” chips (like NVIDIA, Qualcomm, or Ambarella). However, their value is in the specialized AI algorithms and software that would run on those brains.

Their ‘AI brain’ is the algorithm stack: Their proprietary expertise in gaze tracking, eyelid closure detection, head pose estimation, and facial feature analysis represents a specialized form of AI cognition that a humanoid would need for human-robot interaction (HRI). For a humanoid to work safely and effectively alongside people, understanding human attention, intent, and state is critical.

Conclusion: They provide a crucial capability that would reside in the AI brain’s software layer, but not the semiconductor hardware itself.

2. AI Vision: VERY STRONG FIT – This is Seeing Machines’ Core Competency.

This is the area of most direct and obvious relevance.

  • Sensor Focus: While they use standard camera sensors, their entire value proposition lies in maximizing the value extracted from vision sensors. For humanoids, robust vision is non-negotiable.
  • Algorithmic Moat: Their decades of work in attention detection (critical for vehicle safety) are directly transferable to ensuring a humanoid is making “eye contact” or understands what a human is focused on. Their facial movement and state analysis is key for natural interaction.
  • Application: In a humanoid context, this technology wouldn’t just be for safety monitoring, but for enabling fluid collaboration. For example, a humanoid could tell if a human worker is looking at it for instruction, is distracted, or needs assistance.

3. Analog chips for sensing: NO DIRECT FIT

Seeing Machines is not an analog semiconductor company. They do not design motor drivers, power ICs, or low-level sensor interface chips. Their domain is the digital and algorithmic layer that comes after the analog signal has been converted and processed.

Synthesis: where does Seeing Machines fit in the ‘Humanoid’ value chain?

Morgan Stanley notes that “most of the value creation lies with distribution.” Seeing Machines is a critical enabling-technology provider that sits between the raw semiconductor hardware and the final humanoid application, specifically in the perception and human-interaction layer.

  • They are an ‘AI Vision & Cognition’ specialist. For humanoids to move out of controlled factories and into dynamic human environments, the ability to perceive and understand human behavior is as important as perceiving objects. This is Seeing Machines’ unique niche.
  • They benefit from the semiconductor intensity: The more camera feeds, the higher the resolution, and the more complex the real-time analysis required, the more valuable their optimized, purpose-built algorithms become. This drives demand for the capable AI semiconductors (Area 1) on which their software runs.
  • Commercial traction dictated by ecosystem readiness: As Morgan Stanley states, ecosystem readiness is key. Seeing Machines is already a Tier-1 automotive supplier with a proven track record of delivering reliable, safety-critical vision systems. This certification pedigree, system integration expertise, and understanding of functional safety (ASIL levels) are immensely valuable and transferable to the humanoid space, where safety is paramount.

Conclusion

Yes, Seeing Machines’ technology is of significant potential use, primarily and most powerfully in Area 2 (AI Vision), and as a specialized software layer contributing to Area 1 (AI Brain capabilities).

They do not make the semiconductors themselves, but they are a quintessential example of a company that creates immense value by enabling those semiconductors to perform a specific, critical, and high-value task: in this case, understanding human state and attention.

As humanoid developers shift focus from pure mobility to advanced human-robot interaction, companies like Seeing Machines, with deep expertise in human-centric vision AI, become essential partners. Their automotive-grade experience makes them a credible and valuable candidate for the ecosystem Morgan Stanley describes.

Home care robots

I then asked it: “Wouldn’t a humanoid robot used in the home, for instance to care for elderly, possibly infirm, individuals, benefit greatly from having human behaviour monitoring (provided by Seeing Machines technology) among its repertoire?”

It answered: 

Absolutely, unequivocally yes. This is not just a benefit; it is likely a non-negotiable requirement for safe, effective, and socially acceptable deployment of humanoids in a personal care context.

Extending our previous analysis, here’s a detailed breakdown of why Seeing Machines’ core competency in human behavior monitoring would be critical for a domestic care humanoid:

1. Safety as the paramount concern (The ‘Do No Harm’ imperative)

  • Fall Prevention & Detection: A humanoid could use Seeing Machines’ gaze tracking and head pose estimation to understand if the individual is looking at an obstacle, is disoriented, or is about to lose balance. Pre-emptive warning or physical assistance could be initiated.
  • Medical emergency detection: Algorithms for micro-sleep detection, eyelid closure, and loss of attentiveness (pioneered for driver drowsiness) are directly applicable to spotting signs of stroke, diabetic shock, fainting, or sudden medical distress; a concrete sketch follows this list.
  • Safe physical interaction: To assist with lifting or mobility, the robot must know where the human is looking and if they are anticipating the movement. Understanding human attention and intent prevents startling the individual or causing a mishap during physical contact.
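
To make the transfer concrete, here is a minimal, purely hypothetical sketch (in Python) of how per-frame signals of the kind a DMS-style vision stack supplies (eyelid closure, attentiveness, gaze) might feed a care robot’s alert policy. Every type name and threshold below is an illustrative assumption, not anything published by Seeing Machines.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Alert(Enum):
    NONE = auto()
    CHECK_IN = auto()          # robot verbally checks on the person
    NOTIFY_CAREGIVER = auto()  # escalate to a remote human

@dataclass
class HumanState:
    """Per-frame estimates a DMS-style vision stack could plausibly supply."""
    eyelid_closure: float   # 0.0 fully open .. 1.0 fully closed
    attentive: bool         # output of an attention classifier
    gaze_on_hazard: bool    # gaze ray intersects a known obstacle

def care_policy(state: HumanState, eyes_closed_seconds: float) -> Alert:
    # All thresholds here are hypothetical placeholders for illustration.
    if state.eyelid_closure > 0.9 and eyes_closed_seconds > 10.0:
        # Sustained eye closure outside sleep: possible fainting or
        # medical event, so escalate immediately.
        return Alert.NOTIFY_CAREGIVER
    if not state.attentive and state.gaze_on_hazard:
        # Facing a hazard while inattentive: check in pre-emptively.
        return Alert.CHECK_IN
    return Alert.NONE
```

The point of the sketch is the architecture: the care logic on top is comparatively simple, while the expensive, hard-to-build part is the perception stack underneath that produces reliable per-frame estimates. That is the layer Seeing Machines owns.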

2. Enabling effective care & assistance (beyond physical tasks)

  • Medication adherence: The robot can confirm, through gaze and facial action recognition, that a pill was actually taken and swallowed, not just lifted to the mouth and palmed. This solves a major problem in elderly care.
  • Cognitive state monitoring: For individuals with dementia or cognitive decline, changes in facial expression, engagement, and interaction patterns are key indicators. The robot could monitor for signs of confusion, agitation, or withdrawal and alert remote human caregivers or family members.
  • Companionship & engagement: The technology enables socially aware interaction. The robot can tell if its attempts at conversation or activity suggestions are being met with engagement or annoyance, and adjust its behavior accordingly. It can maintain ‘eye contact’ during conversation, a fundamental building block of trust.

3. Building trust and acceptance (The human-robot bond)

  • An elderly individual is far more likely to accept and rely on a machine that demonstrates it ‘sees’ and ‘understands’ them. A robot that blankly stares while helping someone bathe is unnerving. A robot that glances away to provide privacy, or that recognizes a facial expression of pain or embarrassment and responds with reassurance, builds essential trust.
  • This capability transforms the robot from a tool into a perceptive agent, which is psychologically critical for long-term in-home adoption.

How this fits the Morgan Stanley ‘value’ thesis

Morgan Stanley argues most value lies in distribution and enabling technologies. In the home care distribution channel, the ‘killer app’ is not mobility or strength, but empathetic, safe, and perceptive care.

  1. Ecosystem readiness: Seeing Machines is not starting from scratch. Their FOVIO chip (now in its next-gen form) and software are already ASIL-B certified for automotive safety. Translating this to a functional safety standard for robotics (e.g., ISO 13482) is a logical step. Their proven ability to ship millions of units in the harsh, safety-critical automotive environment is a massive credibility advantage.
  2. Commercial traction: ‘Realistic adoption timelines’ for home care robots will be gated by safety and social acceptance regulators (e.g., FDA for ‘care’ functions, consumer protection agencies). A provider with a certified, explainable, and validated system for human state monitoring will have a tremendous regulatory and liability advantage. This directly accelerates commercial traction.
  3. Value capture: As an enabling technology, Seeing Machines could capture value by licensing their perception software stack or providing a complete ‘Human understanding module’ (sensor + dedicated processor + algorithms) to humanoid makers. The value of their IP in ensuring the robot doesn’t cause a catastrophic failure in a vulnerable person’s home is immense.

In summary: For a domestic care humanoid, human behavior monitoring is not a ‘nice-to-have’ feature in its repertoire; it is the foundational layer for its core value proposition.

Seeing Machines’ technology provides the ‘eyes’ and, more importantly, the ‘social brain’ that would allow such a robot to navigate the profoundly complex, unstructured, and emotionally charged environment of a human home.

It directly addresses the primary barriers to adoption: safety, efficacy, and trust. Therefore, Seeing Machines’ technology sits squarely in the highest-value layer of the ecosystem Morgan Stanley describes.

My view

I believe Mitsubishi has already identified this value. That is why it took a 19.9% stake in Seeing Machines. It currently seeks to use the technology in automotive robots (cars), plans next to use it in industrial robots, and could then expand into home care robots.

Does that sound a little far-fetched to some? 

Well, Paul McGlone has admitted that Seeing Machines is discussing expanding the use of its technology into ‘smart factory’ use with Mitsubishi. Industrial robots are absolutely central to Mitsubishi’s smart factory vision.

Mitsubishi’s first humanoid robot, Wakamaru, launched some 20 years ago, wasn’t a success. Yet if it developed a ‘Human understanding module’ with Seeing Machines’ technology, I believe there is the potential to build a home care robot that could be.

Of course, there are many other companies making huge bets on humanoid robots, notably Tesla, which might also be keen to acquire the technology and IP that Seeing Machines possesses.

I’d urge investors to research this area and decide for themselves whether there is merit in my arguments. 

The writer holds stock in Seeing Machines.

Why Seeing Machines should be included in the ‘Humanoid 100’

As Morgan Stanley recently outlined in a broker note, robots represent the physical embodiment of AI, which appears to be why they are becoming THE hottest sector in tech. Yet, despite producing a brilliant note, Morgan Stanley has overlooked one key player in its round-up of the top 100: Seeing Machines.

That may well be because, unlike the likes of Mobileye, Alphabet and Meta, it has a minuscule market cap and resides in a stock market slum called AIM. Regardless, someone soon is going to want to marry this beauty. Let me explain why.

To quote the broker note of 6th February: “The physical embodiment of AI touches a $60tn Total Addressable Market (TAM), global GDP, and the meaning of work.”

In that note Morgan Stanley presented the ‘Humanoid 100’, which it described as “a global mapping of equities across a range of sectors and regions that may have an important role in bringing robots from the lab to your living room”.

It used a graphic to illustrate a rudimentary division of these companies into brain and body value chains.

I’d argue that Seeing Machines should be included in the Brain portion (Vision & Compute Semiconductors), which as it currently stands is overly simplistic. For true robots to be successful, they will need to develop an understanding of the cognitive state of humans, perhaps even display traits we’d associate with empathy.

I think SEE sits in the same niche as Mobileye in that diagram. “These are the companies producing semiconductors that are the core of the robot ‘brain’, allowing robots to learn from, perceive, and/or interact with their environments. Vision-focused semis lie at the edge and allow robots to visualize their environments,” states the note. However, Seeing Machines does something special: it allows robots to visualise humans.


It is Seeing Machines, with its software and hardware, that can breathe life into robots. As Victor Frankenstein would have exclaimed: “It will pioneer a new way, explore unknown powers, and unfold to the world the deepest mysteries of creation.”

Mobile robots

Still sceptical? Well, Seeing Machines is already demonstrating that technological capability by applying it to today’s mobile robots: cars, with its AI-powered driver monitoring.

Its system uses advanced machine vision to precisely measure and analyse head pose, eyelid movements and eye gaze under a full spectrum of demanding in-vehicle lighting conditions. This data is then processed to interpret the driver’s attention state, drowsiness, and impairment levels.
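
Seeing Machines’ production algorithms are proprietary, but the standard published metric in this field is PERCLOS: the proportion of time, over a rolling window, that the eyelids are at least 80% closed. As a minimal sketch with illustrative thresholds (this is not the company’s actual code), a PERCLOS-style drowsiness check over per-frame eyelid estimates looks like this:

```python
from collections import deque

class PerclosMonitor:
    """Minimal PERCLOS-style drowsiness check over per-frame
    eyelid-closure estimates. All thresholds are illustrative."""

    def __init__(self, window_frames=1800, closed_at=0.8, drowsy_at=0.15):
        # window_frames: e.g. a 60-second window at 30 fps
        self.samples = deque(maxlen=window_frames)
        self.closed_at = closed_at  # eyelid >= 80% closed counts as "closed"
        self.drowsy_at = drowsy_at  # alert when >15% of the window is closed

    def update(self, eyelid_closure: float) -> bool:
        """eyelid_closure: 0.0 (open) .. 1.0 (closed), one value per frame,
        as estimated upstream by the camera/vision front end."""
        self.samples.append(eyelid_closure >= self.closed_at)
        perclos = sum(self.samples) / len(self.samples)
        return perclos >= self.drowsy_at  # True -> raise a drowsiness alert
```

The hard engineering, and the moat, sits upstream of this snippet: producing a reliable per-frame eyelid estimate across different faces, eyewear and lighting conditions is what takes decades to get right.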

That same technology is also enabling an ecosystem of highly intelligent vehicle interfaces that employ AI not just to respond to speech commands, but to understand more subtle cues from occupants, indicated by hand gestures and eye movements.

Is it so fanciful to imagine that, in the near future, the ability to assess reduced cognitive capacity and understand such subtle cues could be vital for ‘care’ robots looking after elderly or vulnerable charges?

Recognition of its ability in the transport sector has brought partners rushing to sign deals with Seeing Machines, many of which feature in the ‘Humanoid 100’ list. Yet its latent qualities in the sphere of robotics remain unrecognised by most. Hence, its current market cap belies the true value within. That cannot last much longer.

Do you hear wedding bells?

The writer holds stock in Seeing Machines.

Waymo good news for Seeing Machines: part 2

Interest from US investors in driver monitoring is set to take off as it becomes clear that the technology offers a means to cut the deadly toll on US roads.

Adam Jonas, Morgan Stanley’s star auto analyst, published a note on 24 March, 2021, entitled: ‘What’s on My Mind? Motor Vehicle Safety — A New ESG Frontier’.

In that note he cited a recent report from the National Safety Council (NSC) detailing that, despite an historic fall in miles travelled and safer vehicle designs, the number of US motor vehicle-related deaths in 2020 hit a 13-year high of 42,600.

Also, according to the study, for every US road death there are 114 ‘medically consulted injuries’, resulting in nearly 4.8m vehicle-related injuries in 2020.

That represents a huge, avoidable cost to US society, which the NSC calculates at a staggering US$474bn, or roughly 2.2% of US GDP.

Given that cost, Jonas writes: “We believe such tragic statistics may accelerate a range of policies (at the Federal level and otherwise) that may in turn accelerate changes of key ADAS technologies in the US fleet.” He adds: “The average age of a car in the US is over 12 years, amongst the highest of any developed nation in the world. We have long discussed the potential for taxpayer/policy actions to accelerate the scrapping and replacement of US vehicles.”

His takeaway is that, while there has been plenty of focus on the climate-related impact of today’s vehicle tech: “We see scope for greater attention to be paid to life-saving/ADAS/autonomous related technology.”

Fortunately, Seeing Machines is at the centre of this life-saving technology, and interest from US investors is clearly accelerating.

Moreover, as more and more vehicles with its tech are driven in the US (Ford F-150 and Mach-E, as well as GM Cadillacs), interest will only grow.

Waymo

This will of course be helped by Seeing Machines publicly acknowledging its involvement and RNSing such news. For example, as Colin Barnden of Semicast Research confirmed in an article this week, it has supplied its tech to Waymo.

This blog first wrote about Seeing Machines supplying Waymo back in 2018. Still, it is about time we had it confirmed via an official RNS, especially given Colin Barnden’s article.

Regardless, I expect Seeing Machines to be rerated ‘imminently’ (not a word beloved of its investors) as more US investors and analysts realise it is not a jam-tomorrow stock but a jam factory gearing up production.

The writer holds stock in Seeing Machines.
