Hey Jim! How can I help you?
"Take me to the nearest Starbucks, please."
Give me a moment, I will plan your trip!
The shortest route is 29 minutes away...
We will take metro 10 and metro 8.
"Hey MiMi"
"OK MiMi, engage"
"Take me home"
...like a guide dog.
SeeingRobot addresses a problem that affects more than 250 million people around the world: autonomy in mobility for both partially and completely visually impaired (VI) people.
In April 2019, SeeingRobot unveiled a concept of an autonomous cane for visually impaired and blind people. It's called MiMi.
MiMi is an assistive technology that aims to restore autonomy for VIs in their day-to-day mobility. It integrates several cutting-edge technologies, such as robotics, deep learning, and machine learning, to solve four principal challenges:
a) Navigating to unknown places while maintaining a sense of the surrounding environment
b) Detecting and avoiding obstacles
c) Detecting and recognizing guiding signs
d) Moving in full safety and comfort
MiMi is developed to assist a VI in going from point A (current location) to point B (final destination) with very little intervention. MiMi uses map services and integrates with smart-city infrastructure through APIs to communicate and exchange information about traffic, public transport, bus stops, metro stations, taxi services, etc.
Ideally, through a voice command, the user inputs the address where he/she wants to go; MiMi locates itself, plans the trip with the best route possible, then engages after getting a confirmation. During the trip, MiMi automatically finds its way to sidewalks or other pedestrian walking areas by tracing possible trajectories based on trained models and satellite images.
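The voice-to-trip flow described above can be sketched in a few lines. The helper names `geocode` and `plan_route` and the sample routes are illustrative stand-ins, not part of any published MiMi API:

```python
# Hypothetical sketch of a voice-command trip flow.
# `geocode` and `plan_route` are illustrative stand-ins, not a real API.

def geocode(address):
    # Stand-in: map a spoken address to coordinates.
    return {"Starbucks": (48.8566, 2.3522)}.get(address, (0.0, 0.0))

def plan_route(origin, destination):
    # Stand-in: return candidate routes with durations in minutes.
    return [{"legs": ["metro 10", "metro 8"], "minutes": 29},
            {"legs": ["bus 38"], "minutes": 41}]

def handle_command(current_location, spoken_address):
    destination = geocode(spoken_address)
    routes = plan_route(current_location, destination)
    best = min(routes, key=lambda r: r["minutes"])
    # Announce the plan; the cane engages only after user confirmation.
    return f"Shortest route is {best['minutes']} minutes via {' and '.join(best['legs'])}."

print(handle_command((48.85, 2.35), "Starbucks"))
```

A real implementation would call a live routing service here; the point is only that the plan is announced and confirmed before the trip begins.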
When encountering obstacles or dangers, MiMi uses its state-of-the-art stereo camera to measure the distance to objects (static or dynamic) and to capture their shape, size, and other relevant information as input for calculating the maneuvering trajectory required to avoid the obstacle safely and fluidly. In addition, it detects traffic signs and lights to facilitate and assist VIs in crossing intersections or finding stairs and elevators. Simultaneously, MiMi predicts the movement of dynamic objects, such as cars and flows of people, to increase users' safety during navigation.
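The distance measurement mentioned above rests on the standard stereo-vision relation: with focal length f (in pixels), camera baseline B (in meters), and disparity d (in pixels), depth is Z = f·B/d. A minimal sketch with illustrative numbers, not MiMi's actual calibration:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative calibration values (not MiMi's real ones):
# 700 px focal length, 12 cm baseline, 20 px measured disparity.
print(stereo_depth(700, 0.12, 20))  # 4.2 meters
```

Nearby objects produce large disparities and small depths, which is why a wider baseline improves range accuracy at distance.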
Because we live in an ever-changing environment, street works or disruptions in metro operations can occasionally force us to take another direction. MiMi identifies navigation updates and road signs to find alternative routes and informs users about the new directions.
Using its on-board camera and SLAM technology, MiMi generates 3D maps worldwide; users can save and pinpoint certain locations and later trade them. Data such as “Trips”, “Cities”, “POIs”, etc. will be accessible via MiMi's e-store, to be developed in the future.
We call this Process Zero. MiMi uses a combination of sensors and cameras to collect data points, generate 3D maps of the surrounding environment, and memorize frequently used routes. This feature optimizes calculation performance every time the user visits the same place again.
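The idea of reusing previously computed maps and routes can be sketched as a simple cache keyed by origin and destination. The helper names here are hypothetical, for illustration only:

```python
# Illustrative route cache for the "memorize frequently used routes" idea.
route_cache = {}

def compute_route(origin, destination):
    # Stand-in for an expensive mapping/SLAM computation.
    return [origin, "sidewalk segment", destination]

def get_route(origin, destination):
    key = (origin, destination)
    if key not in route_cache:
        route_cache[key] = compute_route(origin, destination)  # slow path, first visit
    return route_cache[key]  # fast path on every revisit

get_route("home", "metro 10")  # computed once
get_route("home", "metro 10")  # served from cache
print(len(route_cache))  # 1
```

On repeat visits, the expensive computation is skipped entirely, which is the performance gain the paragraph describes.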
MiMi is also designed to enable users to share 3D maps of places and defined routes with the community.
The user can also discover new places and register POIs on the World Map and gain loyalty points.
MiMi recognizes obstacles (static or moving) and traces avoidance trajectories in real time.
Various classes of objects are detected and classified using state-of-the-art CNNs and machine learning algorithms.
Also, useful information such as road intersections, traffic lights, road signs, stairs, bus stations, and building entrances is given directly to blind users so they can maintain a sense of the surrounding environment and navigate more safely and securely.
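Turning detections into spoken cues, as described above, might look like the sketch below. The detector output format and class labels are assumed for illustration, not taken from an actual MiMi model:

```python
# Hypothetical detector output: (class_label, distance_m, bearing).
detections = [
    ("traffic light", 12.0, "ahead"),
    ("stairs", 4.0, "left"),
    ("car", 8.0, "right"),
]

def announce(detections, max_range_m=10.0):
    """Build spoken cues for nearby detections, nearest first."""
    nearby = [d for d in detections if d[1] <= max_range_m]
    nearby.sort(key=lambda d: d[1])
    return [f"{label} {dist:.0f} meters {bearing}" for label, dist, bearing in nearby]

for line in announce(detections):
    print(line)
```

Sorting by distance and capping the range keeps the audio channel uncluttered, so the most urgent obstacle is always announced first.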