Ground Detection in Unity: Instantiate a 3D Object on the Detected Ground or Surface

This tutorial is about ground detection in Unity. We will learn how to detect the ground and other surfaces and use them to display our robot's model in augmented reality (AR).

Ground detection in Unity and using it to display a 3D object in augmented reality (AR)

The contents of the entire augmented reality training course can be found at the link below: https://www.mecharithm.com/introduction-to-augmented-reality-ar-course/

[Important] Augmented reality source code: the source code for all of the tutorials can be downloaded HERE!

Watch the video version of this lesson: at the link below, you will find the corresponding lessons on augmented reality…
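To make the idea concrete, here is a minimal C# sketch of plane-based placement with AR Foundation, assuming an AR Session Origin that already has an ARRaycastManager and ARPlaneManager attached; the script name PlaceOnGround and the placedPrefab field are illustrative names, not the tutorial's exact code.

```csharp
// Minimal sketch: place a prefab on a detected plane where the user taps.
// Assumes an AR Session Origin with ARRaycastManager and ARPlaneManager attached,
// and a prefab assigned in the Inspector (names here are illustrative).
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaceOnGround : MonoBehaviour
{
    [SerializeField] GameObject placedPrefab;          // e.g., the robot model
    [SerializeField] ARRaycastManager raycastManager;  // on the AR Session Origin

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        // Legacy Input API: react to the first frame of a touch.
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast from the touch position against detected planes only.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // The first hit is the closest plane; spawn the model at that pose.
            Pose hitPose = hits[0].pose;
            Instantiate(placedPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```

Raycasting against TrackableType.PlaneWithinPolygon restricts hits to the detected ground or surface planes, so the instantiated model sits on them.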
Read More
Image Target to Demonstrate a 3D Object in AR Foundation

In this lesson, we will learn how to detect an image in augmented reality (AR) and its applications. In fact, we will use an image target as a position reference for a 3D model in AR Foundation, or use it as a trigger for one or more functions in the program.

Using an image target to demonstrate a 3D UAV in AR Foundation

The contents of the entire augmented reality training course can be found at the link below: https://www.mecharithm.com/introduction-to-augmented-reality-ar-course/

[Important] Augmented reality source code: the source code for all of the tutorials can be downloaded HERE!

Watch the video version of…
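As a hedged sketch of how image-target tracking is typically wired up in AR Foundation (not necessarily the lesson's exact code), the script below subscribes to ARTrackedImageManager's trackedImagesChanged event, spawns a model when a reference image is detected, and hides it when tracking is lost; ImageTargetSpawner and modelPrefab are illustrative names, and a reference image library is assumed to be configured on the manager.

```csharp
// Minimal sketch: anchor a 3D model to a tracked reference image.
// Assumes an ARTrackedImageManager with a reference image library already set up.
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ImageTargetSpawner : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;
    [SerializeField] GameObject modelPrefab;   // e.g., the UAV model

    void OnEnable()  => trackedImageManager.trackedImagesChanged += OnChanged;
    void OnDisable() => trackedImageManager.trackedImagesChanged -= OnChanged;

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        // A new image target was detected: parent the model to it so it follows the target.
        foreach (ARTrackedImage trackedImage in args.added)
        {
            Instantiate(modelPrefab, trackedImage.transform);
        }

        // Show the model only while its image is actively tracked.
        foreach (ARTrackedImage trackedImage in args.updated)
        {
            bool tracking = trackedImage.trackingState == TrackingState.Tracking;
            foreach (Transform child in trackedImage.transform)
                child.gameObject.SetActive(tracking);
        }
    }
}
```

The same added/updated callbacks are also a natural place to trigger other functions in the program when a particular image is recognized.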
Read More
How to Download and Setup Unity Engine and AR Foundation SDK 

This lesson will cover the requirements for an augmented reality project, such as how to download and set up the Unity Engine and the AR SDK. As part of this lesson, we will download and install Unity and all of the necessary plugins, as well as install the AR SDK in our project.

The contents of the entire augmented reality training course can be found at the link below: https://www.mecharithm.com/introduction-to-augmented-reality-ar-course/

[Important] Augmented reality source code: the source code for all of the tutorials can be downloaded HERE!

Watch the video version of this lesson: at the link below, you will find the…
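Once Unity and the AR Foundation packages are installed, a quick way to confirm the setup works on a device is to check AR availability before enabling the session. The snippet below is a minimal sketch following the standard AR Foundation availability-check pattern; the ARSupportCheck name is illustrative, and the arSession field is assumed to be assigned in the Inspector.

```csharp
// Minimal sketch: verify the installed AR SDK is supported on the device
// before enabling the AR session.
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ARSupportCheck : MonoBehaviour
{
    [SerializeField] ARSession arSession;   // assumed to be assigned in the Inspector

    IEnumerator Start()
    {
        // Ask the platform whether AR is available on this device.
        if (ARSession.state == ARSessionState.None ||
            ARSession.state == ARSessionState.CheckingAvailability)
        {
            yield return ARSession.CheckAvailability();
        }

        if (ARSession.state == ARSessionState.Unsupported)
        {
            Debug.Log("AR is not supported on this device.");
        }
        else
        {
            // AR is supported; start the session.
            arSession.enabled = true;
        }
    }
}
```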
Read More
What is a collaborative robot used for?

In this short answer, we address the question of what a collaborative robot is, asked by one of our beloved followers. For a long time, industrial robots, which were primarily programmable robotic arms, were kept in cages separated from human workers because they were not safe to use in close proximity to humans. They were mainly used to carry out dangerous tasks unsafe for human workers.

For a long time, industrial robots were put in a cage separated from human workers.

Fast forward several decades: with the rise of the Internet of Things (IoT), Machine Learning (ML), advanced sensors, and…
Read More
Robot Grasping in a Heavily Cluttered Environment

Korea Advanced Institute of Science and Technology (KAIST) student Dongwon Son has recently published interesting research on reactive grasping in a heavily cluttered environment in IEEE Robotics and Automation Letters.

Reactive robot grasping in a heavily cluttered environment. Courtesy of Samsung Research and Dongwon Son.

This study proposed a closed-loop framework for predicting six-degree-of-freedom (DoF) grasps in a heavily cluttered environment using vision observations.

Prediction results in robot grasping. Courtesy of Samsung Research and Dongwon Son.

Experimental results on a robot in a heavily cluttered environment showed that the grasping success rate improved quantitatively compared…
Read More
Introduction to Augmented Reality (AR) Course

Hello and welcome to the augmented reality training course with applications in robotics.

New augmented reality (AR) training course

In this course, we'll cover a variety of new topics, such as using images to display models and to call functions, surface detection and how to use it to display robot models or waypoints for robot pathfinding, and using the OpenCV library for object detection and color detection…
Read More
Vector Home Robot

Vector from Digital Dream Labs (DDL) is a home robot that can tell you the weather, time your dinner, take photos, react to your touch, or even carry you to bed. Actually, not really; the last part was a joke.

Vector home robot

Through different embedded sensors, Vector explores and interacts with its environment, recognizes objects, and avoids obstacles. Thanks to the Vision Intelligence 200 Platform's powerful image processing and machine learning, Vector can navigate autonomously and detect objects and sounds. There are also touch sensors, four microphones arranged in an array to detect sounds and recognize natural speech,…
Read More
Robot Mimics Human Expressions

By now, you've probably seen Ameca from Engineered Arts, a humanoid robot that looks strikingly human and can interact and react like a human using a human-like artificial body (AB) along with human-like artificial intelligence (AI).

The Ameca robot from Engineered Arts can mimic human expressions and emotions.

Engineered Arts released a new video this week that showcases Ameca's ability to mimic human expressions. An iPhone 12 with ARKit is used for facial motion capture, and the action is mapped to Ameca's face in real time. I promise this isn't a sci-fi movie; this is a real robot…
Read More
Autonomous Navigation Mobile Robot Using ROS Without Using a Pre-saved Map

In the previous lesson, we started with autonomous navigation using a pre-saved map, wrote our own code, and created the auto_nav package for our final project. In a nutshell, we navigated autonomously using a pre-saved map, were introduced to autonomous navigation concepts, and ran localization, move_base, and other navigation nodes.

Autonomous navigation of Turtlebot in ROS

In this last lesson of the series of ROS tutorials, we will navigate autonomously without using a pre-saved map (actual SLAM) and create the auto_nav package (part 2).

Autonomous navigation of Turtlebot without a pre-saved map (actual SLAM)

Let's also have a summary of the whole course:…
Read More
VR Robotics Simulator: Professor-Student Accesses

In the previous lesson, we shared robot actions and their effects over the network, such as grabbing an object and releasing it in a multiplayer scene.

Sharing actions in the network in Unity

This lesson is about professor-student accesses in virtual reality, and we will create some accesses for the professor, such as robot selection for the scene and limitations on robot controls. We will conclude our course with this lesson.

Professor-student accesses in virtual reality

The contents of the entire course on developing a VR Robotics Simulator are as follows: Introduction to the Course (1/15), Install VR…
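As a rough, library-agnostic sketch of the idea (the lesson itself builds on the course's multiplayer setup), professor-only access can be modeled as a simple role check that shows the robot-selection UI and enables robot-control scripts only for the professor; UserRole, AccessManager, and the serialized fields below are illustrative names rather than the course's actual code.

```csharp
// Minimal sketch (library-agnostic): gate robot selection and robot control by user role.
using UnityEngine;

public enum UserRole { Professor, Student }

public class AccessManager : MonoBehaviour
{
    [SerializeField] UserRole localRole = UserRole.Student;
    [SerializeField] MonoBehaviour[] robotControlScripts; // scripts that move the robot
    [SerializeField] GameObject robotSelectionMenu;       // professor-only UI

    void Start()
    {
        bool isProfessor = localRole == UserRole.Professor;

        // Only the professor can open the robot-selection menu for the scene.
        robotSelectionMenu.SetActive(isProfessor);

        // Students get view-only access: their robot-control scripts are disabled.
        foreach (var script in robotControlScripts)
            script.enabled = isProfessor;
    }
}
```

In a networked scene, the local role would typically come from the multiplayer session rather than an Inspector field, but the gating logic stays the same.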
Read More