{"id":1298,"date":"2018-11-19T13:43:04","date_gmt":"2018-11-19T05:43:04","guid":{"rendered":"http:\/\/people.utm.my\/ajune\/?p=1298"},"modified":"2018-11-19T13:45:32","modified_gmt":"2018-11-19T05:45:32","slug":"collision-detection-with-leap-motion","status":"publish","type":"post","link":"https:\/\/people.utm.my\/ajune\/2018\/11\/19\/collision-detection-with-leap-motion\/","title":{"rendered":"Collision Detection with Visual Feedback using Leap Motion"},"content":{"rendered":"<p class=\"selectionShareable\">How can we make penetration of a virtual surface feel more logical and realistic through visual feedback?<\/p>\n<p class=\"selectionShareable\">To answer this question, we tried three approaches\u2014highlighting the boundary and depth where the hand intersects a mesh, adding color gradients to the fingertips as they approach interactive objects and UI elements, and creating responsive affordances for unpredictable grabs.\u00a0But first, let&#8217;s look at how Leap Motion&#8217;s Interaction Engine handles the interaction between hands and objects.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/standard-clipping.gif\"><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-full wp-image-1299\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/standard-clipping.gif\" alt=\"\" width=\"400\" height=\"267\" \/><\/a><\/p>\n<p>In a virtual world, this kind of visual clipping occurs whether you touch a static surface (such as a wall) or an interactive object.\u00a0The two core interactions in the Leap Motion libraries\u2014touching and grabbing\u2014almost always involve the user&#8217;s hand penetrating the interactive object.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/interaction-engine-standard-clipping-and-collisions.gif\"><img decoding=\"async\" class=\"aligncenter size-full wp-image-1300\" 
src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/interaction-engine-standard-clipping-and-collisions.gif\" alt=\"\" width=\"480\" height=\"270\" \/><\/a><\/p>\n<p>Similarly, when interacting with a physics-based user interface (such as the <em>InteractionButtons<\/em> of the Leap Motion Interaction Engine, which compress along the Z axis), the fingertip still clips slightly through the interactive object once the UI element reaches the end of its travel.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/interaction-button-clipping.gif\"><img decoding=\"async\" class=\"aligncenter size-full wp-image-1301\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/interaction-button-clipping.gif\" alt=\"\" width=\"480\" height=\"270\" \/><\/a><\/p>\n<h4>Experiment #1: Highlighting the boundary and depth while intersecting a mesh<\/h4>\n<p class=\"selectionShareable\">In our first experiment, we proposed that when the hand intersects another mesh, the boundary should be made visible.\u00a0The part of the hand just below the surface remains visible, but its color or transparency changes.<\/p>\n<p class=\"selectionShareable\">To achieve this, we applied a shader to our hand mesh.\u00a0We measure the distance of each pixel on the hand from the camera and compare it to the scene depth (read from the camera&#8217;s depth texture).\u00a0If the two values are close, we make the pixel glow, and increase the glow as the two distances converge.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/intersection-shader.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1306\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/intersection-shader.gif\" alt=\"\" width=\"480\" height=\"293\" \/><\/a><\/p>\n<p 
class=\"selectionShareable\">With the glow intensity and the depth range kept to a minimum, this seems like an effect that could be applied universally in an application without appearing particularly eye-catching.<\/p>\n<h4>Experiment #2: Adding a color gradient to the fingertips when approaching interactive objects and UI elements<\/h4>\n<p class=\"selectionShareable\">In the second experiment, we decided to change the color of the fingertips to match the surface color of the object we want to interact with.\u00a0The closer the hand gets to the touchable object, the more closely the colors match.\u00a0This helps the user judge the distance between fingertip and object surface more easily, while reducing the likelihood that the fingertip will penetrate the surface.\u00a0In addition, even if the fingertip does penetrate the mesh, the resulting visual clipping is less jarring \u2013 because the fingertip and the object surface are the same color.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/finger-gradients-buttons.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1305\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/finger-gradients-buttons.gif\" alt=\"\" width=\"584\" height=\"377\" \/><\/a><\/p>\n<p>Whenever we hover over an InteractionObject, we check the distance from each fingertip to the surface of the object.\u00a0We then use this data to drive a gradient that changes the color of each fingertip independently.<\/p>\n<p>This experiment really helped us judge the distance between our fingertips and the surface of the object more accurately.\u00a0It also made it easier to tell which fingertip was closest to making contact.\u00a0Combined with the effect of Experiment #1, it makes the phases of the interaction (approach, contact, intersection, grasp) clearer.<\/p>\n<p><a 
href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/fingertip-gradient.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1302\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/fingertip-gradient.gif\" alt=\"\" width=\"496\" height=\"306\" \/><\/a><\/p>\n<h4>Experiment #3: Responsive affordances for unpredictable grabs<\/h4>\n<p class=\"selectionShareable\">How do you grab a virtual object in VR?\u00a0You might close your fist around it, pinch it, or clamp it between your fingers.\u00a0Previously, we tried designing visual cues\u2014handles, for example\u2014in the hope that they would guide the user toward how to grab an object.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/visual-affordance-vr.png\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-1307 alignleft\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/visual-affordance-vr.png\" alt=\"\" width=\"350\" height=\"195\" srcset=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/visual-affordance-vr.png 694w, https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/visual-affordance-vr-300x167.png 300w\" sizes=\"(max-width: 350px) 100vw, 350px\" \/><\/a><\/p>\n<p>By casting a ray from each finger joint and checking where it hits the InteractionObject, we spawn a shallow dimple mesh at the ray&#8217;s hit point.\u00a0We align the dimple with the hit normal and use the raycast&#8217;s hit distance \u2013 essentially the depth of the finger inside the object \u2013 to drive a blendshape that deepens the dimple.<\/p>\n<p>From this concept, we diverged further. 
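The hit-distance-to-blendshape mapping described above can be sketched roughly as follows. This is a minimal illustration, not Leap Motion API code: the function name, the linear ramp, and the 2 cm full-extension depth are all assumptions.

```python
def dimple_blend_weight(finger_depth, max_depth=0.02):
    """Map fingertip penetration depth to a blendshape weight in [0, 1].

    finger_depth -- how far the fingertip sits inside the object, derived
                    from the raycast hit distance (metres)
    max_depth    -- assumed depth at which the dimple is fully extended
    """
    if max_depth <= 0:
        raise ValueError("max_depth must be positive")
    # Linear ramp, clamped: no dimple before contact, full dimple at max_depth.
    return min(max(finger_depth / max_depth, 0.0), 1.0)
```

In Unity the resulting weight would be fed to something like `SkinnedMeshRenderer.SetBlendShapeWeight` each frame to animate the dimple.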
Could we anticipate the hand&#8217;s approach before it touches the surface of the object, and reflect that visually?\u00a0To do this, we increased the length of the fingertip raycast so that a hit registers before the finger touches the surface.\u00a0Then we created a two-part prefab consisting of (1) a circular mesh and (2) a cylindrical mesh with a depth mask that stops any pixels behind it from being rendered.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/acme-raycast.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1309\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/acme-raycast.gif\" alt=\"\" width=\"600\" height=\"337\" \/><\/a><\/p>\n<p class=\"selectionShareable\">We also tried adding a fingertip color gradient here.\u00a0This time, however, the gradient is driven not by proximity to the object but by the depth of the finger inside it.<\/p>\n<p class=\"selectionShareable\"><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/dimple-mesh-and-fingertips.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1310\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/dimple-mesh-and-fingertips.gif\" alt=\"\" width=\"400\" height=\"273\" \/><\/a><\/p>\n<p>We set up the layers so that the depth mask hides the mesh of the <em>InteractionObject<\/em> but not the user&#8217;s hand mesh. 
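The depth-driven fingertip gradient could be approximated with a simple per-channel color lerp. This is a sketch under assumptions: the helper names, the RGB-tuple representation, and the linear blend are illustrative, not the actual shader code.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars, with t clamped to [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    return a + (b - a) * t

def fingertip_color(finger_depth, max_depth, hand_rgb, surface_rgb):
    """Blend the fingertip color toward the surface color as the finger
    sinks deeper into the object (depth-driven, not proximity-driven)."""
    t = finger_depth / max_depth
    return tuple(lerp(h, s, t) for h, s in zip(hand_rgb, surface_rgb))
```

At zero depth the fingertip keeps its own color; at `max_depth` or beyond it fully matches the surface.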
These effects make grabbing feel more coherent, as if our fingers were being invited through the mesh.\u00a0Obviously, this approach would require a more complex system to handle objects other than spheres \u2013 as well as the palm of the hand, and fingers that are close together.<\/p>\n<p><a href=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/acme-sphere.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1311\" src=\"https:\/\/people.utm.my\/ajune\/wp-content\/uploads\/sites\/987\/2018\/11\/acme-sphere.gif\" alt=\"\" width=\"400\" height=\"267\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>How can we make penetration of a virtual surface feel more logical and realistic through visual feedback? To answer this question, we tried three approaches\u2014highlighting the boundary and depth where the hand intersects a mesh, adding color gradients to the fingertips as they approach interactive objects and UI elements, and creating responsive affordances for unpredictable grabs. 
But [&hellip;]<\/p>\n","protected":false},"author":542,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1298","post","type-post","status-publish","format-standard","hentry","category-augmented-reality","entry"],"_links":{"self":[{"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/posts\/1298","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/users\/542"}],"replies":[{"embeddable":true,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/comments?post=1298"}],"version-history":[{"count":3,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/posts\/1298\/revisions"}],"predecessor-version":[{"id":1315,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/posts\/1298\/revisions\/1315"}],"wp:attachment":[{"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/media?parent=1298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/categories?post=1298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/people.utm.my\/ajune\/wp-json\/wp\/v2\/tags?post=1298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}