News

Oct 2017: The EUROVR association is co-financing an important initiative: the VR Tour. Organized by Laval Virtual, the tour is scheduled to visit the various actors working in the fields of Virtual and Augmented Reality across Europe.

Jun 17, 2017: Haptic and hand tracking demos at the Open Campus 2017.

Feb-Apr 2017: David Vilela (Mechanical Engineering Laboratory, University of Coruna, Spain) visited our lab. He is working on benchmarks comparing different methods for computing intersections in collision detection, as well as different force models.

Feb 2017: G. Zachmann and J. Teuber visited Mahidol University in Bangkok, Thailand, as part of a delegation from the University of Bremen. The goal of the visit was to foster cooperation between the two universities and lay the groundwork for future collaborations.

Jun 2016: Radio Bremen visited our lab to film the work of the Creative Unit "Intra-Operative Information" for a news magazine on the local TV station. The film is available at Radio Bremen and on our website.

May 16, 2016: Patrick Lange was honored with the SIGSIM Best PhD Award at the ACM SIGSIM PADS Conference 2016.

Jun 19-21, 2015: G. Zachmann gives invited talk at the DAAD-Stipendiatentreffen in Bremen, Germany.

Jun 2015: Haptic and hand tracking demos at the Open Campus 2015.

Dec 08-10, 2014: ICAT-EGVE 2014 and EuroVR 2014 conferences at the University of Bremen organized by G. Zachmann.

Sep 25-26, 2014: GI VR/AR 2014 conference at the University of Bremen organized by G. Zachmann.

Sep 24-25, 2014: VRIPHYS 2014 conference at the University of Bremen organized by G. Zachmann.

Feb 4, 2014: G. Zachmann gives invited talk on Interaction Metaphors for Collaborative 3D Environments at Learntec.

Jan 2014: G. Zachmann was invited to be a member of the Review Panel in the Human Brain Project for the Competitive Call for additional project partners.

Nov 2013: Invited Talk at the "Cheffrühstück 2013"

Oct 2013: PhD thesis of Rene Weller published in the Springer Series on Touch and Haptic Systems.

Jun 2013: G. Zachmann participated in the Dagstuhl Seminar "Virtual Realities" (13241).

Jun 2013: Haptic and hand tracking demos at the Open Campus 2013.

Jun 2013: Invited talk at Symposium für Virtualität und Interaktion 2013 in Heidelberg by Rene Weller.

Apr 2013: Rene Weller was honored with the EuroHaptics Ph.D. Award at the IEEE World Haptics Conference 2013.

Jan 2013: Talk at the graduation ceremony of the University of Bremen by Rene Weller.

Oct 2012: Invited Talk by G. Zachmann at the DLR VROOS Workshop "Servicing im Weltraum -- Interaktive VR-Technologien zum On-Orbit Servicing" in Oberpfaffenhofen near Munich, Germany.

Oct 2012: Daniel Mohr earned his doctorate in the field of vision-based pose estimation.

Sept 2012: G. Zachmann: Keynote Talk at ICEC 2012, 11th International Conference on Entertainment Computing.

Sep 2012: "Best Paper Award" at GI VR/AR Workshop in Düsseldorf.

Sep 2012: Rene Weller earned his doctorate in the field of collision detection.

Aug 2012: GI-VRAR-Calendar 2013 is available!

Real-time camera-based 3D hand tracking

Hand tracking has applications in many fields, such as navigation in virtual environments, virtual prototyping, gesture recognition, and motion capture. The goal of this project is to track the global position and all finger joint angles of a human hand in real-time.

Due to measurement noise, occlusion, cluttered background, inappropriate illumination, high dimensionality, and real-time constraints, hand-tracking is a scientific challenge.

We use multiple cameras to capture images of the hand from different directions. Cues such as skin color, edges, skin texture, and the previous hand position can be used to extract the 2D shape of the hand in each image. We utilize dimension reduction techniques to cope with the high dimensionality of the tracking problem (the hand has about 21 local DOFs and 6 global DOFs).
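To illustrate the dimension-reduction step, here is a minimal sketch using PCA on joint-angle vectors. The data, the number of retained components, and the function names are illustrative assumptions, not our actual pipeline:

```python
# Illustrative sketch: reducing hand-pose dimensionality with PCA.
# The 21 local joint angles are projected onto a low-dimensional
# subspace learned from recorded poses; the random data and k=7
# are stand-ins, not trained values.
import numpy as np

rng = np.random.default_rng(0)
poses = rng.standard_normal((1000, 21))  # stand-in for recorded joint-angle vectors

# Center the data and compute the principal axes via SVD.
mean = poses.mean(axis=0)
_, _, vt = np.linalg.svd(poses - mean, full_matrices=False)

k = 7  # number of principal components to keep (assumed)

def project(pose):
    """Project a 21-DOF pose into the k-dimensional subspace."""
    return (pose - mean) @ vt[:k].T

def reconstruct(coeffs):
    """Map low-dimensional coefficients back to joint angles."""
    return coeffs @ vt[:k] + mean

low = project(poses[0])
approx = reconstruct(low)
assert low.shape == (7,) and approx.shape == (21,)
```

In practice, searching for the best pose in the reduced space is much cheaper than in the full 21-dimensional joint-angle space.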

The following figure shows the overall architecture of our system.

[Figure: overall architecture of the hand-tracking system]


Poster

This poster illustrates the main steps of the tracking algorithm.

Publications

Position Papers



Results

Segmentation-Free, Area-Based Articulated Object Tracking

Videos

Video: DivX, WMV9

The video shows a visual comparison of a conventional skin-segmentation-based similarity computation (left column) with our novel divergence-based one (right column). Both similarity measures are combined with a brute-force search (top row) and with our new smart mode-finding approach (bottom row) to find the best matching position and hand pose. The results show that the divergence-based approach outperforms the conventional approach in situations where the skin segmentation does not work well.
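To give an intuition for a divergence-based similarity, here is a toy sketch. It is not the paper's exact formulation; the histogramming, the use of KL divergence, and all names are illustrative assumptions. The idea: instead of thresholding a skin map first, compare the color distribution inside the hypothesized silhouette with the distribution outside it; a good hypothesis separates the two, yielding a large divergence:

```python
# Illustrative divergence-based similarity (not the paper's exact method):
# compare intensity histograms inside vs. outside a template silhouette.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def divergence_similarity(image, mask, bins=16):
    """Divergence between inside- and outside-silhouette histograms."""
    inside, _ = np.histogram(image[mask], bins=bins, range=(0.0, 1.0))
    outside, _ = np.histogram(image[~mask], bins=bins, range=(0.0, 1.0))
    return kl_divergence(inside.astype(float), outside.astype(float))

img = np.zeros((8, 8)); img[2:6, 2:6] = 0.8           # bright "hand" on dark background
good = np.zeros((8, 8), bool); good[2:6, 2:6] = True  # silhouette on the hand
bad = np.zeros((8, 8), bool); bad[0:4, 0:4] = True    # misplaced silhouette
assert divergence_similarity(img, good) > divergence_similarity(img, bad)
```

Note that no hard skin/non-skin decision is ever made, which is why such a measure can remain robust when segmentation would fail.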

FAST: Fast Adaptive Silhouette Area based Template Matching

Videos

Video 1: MS-Video-1, MOV
Video 2: DivX, WMV9, MOV
Video 3: DivX, WMV9, MOV

The leftmost video demonstrates the rectangle-covering algorithm. The middle and right videos demonstrate the matching approach. Of most interest is the bottom left panel, which shows the rectangular representation of the templates. Note that, at matching time, we can adaptively choose the matching accuracy simply by adjusting the number of rectangles per template used to compute the similarity.
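The rectangle-covering idea can be sketched with a simple greedy stand-in (the actual covering algorithm shown in the video may differ; everything here is illustrative). Each template silhouette is covered by axis-aligned rectangles, and using only a prefix of that list at matching time trades accuracy for speed:

```python
# Illustrative greedy covering of a binary silhouette with axis-aligned
# rectangles (x0, y0, x1, y1), exclusive upper bounds. A stand-in for the
# covering algorithm, not our actual implementation.
import numpy as np

def greedy_rectangle_cover(mask):
    """Cover all True pixels of a binary mask with rectangles."""
    remaining = mask.copy()
    rects = []
    while remaining.any():
        ys, xs = np.nonzero(remaining)
        y0, x0 = ys[0], xs[0]                 # topmost-leftmost uncovered pixel
        x1 = x0
        while x1 < mask.shape[1] and remaining[y0, x1]:
            x1 += 1                           # grow the rectangle rightwards
        y1 = y0
        while y1 < mask.shape[0] and remaining[y1, x0:x1].all():
            y1 += 1                           # grow it downwards while full rows fit
        rects.append((x0, y0, x1, y1))
        remaining[y0:y1, x0:x1] = False
    return rects

mask = np.array([[1, 1, 0],
                 [1, 1, 1],
                 [0, 1, 1]], dtype=bool)
rects = greedy_rectangle_cover(mask)
covered = np.zeros_like(mask)
for x0, y0, x1, y1 in rects:
    covered[y0:y1, x0:x1] = True
assert (covered == mask).all()                # every silhouette pixel is covered
```

Evaluating the similarity over only the first few rectangles gives a coarse score that can reject bad template positions early.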

Silhouette Area Based Similarity Measure for Template Matching in Constant Time

Videos

Video: DivX, WMV9, MOV

The video demonstrates the main steps of our area-based similarity measure. Skin segmentation is applied to the input image; as a result, we obtain the skin-likelihood map shown in the top right panel. The templates are represented by a very compact data structure consisting of a set of rectangles (bottom left panel). We compute a similarity measure based on the per-pixel joint probability. Utilizing the integral image of the log-likelihood, we are able to compute the similarity measure more than 10 times faster than state-of-the-art approaches.
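The core trick can be sketched in a few lines (the toy likelihood map and all names are illustrative assumptions): the joint probability over a region is a product of per-pixel likelihoods, so its logarithm is a sum of per-pixel log-likelihoods, and a sum over any axis-aligned rectangle can be read from an integral image with four lookups:

```python
# Illustrative sketch of the constant-time similarity: sum the per-pixel
# log-likelihood over a rectangle via an integral image (summed-area table).
import numpy as np

likelihood = np.full((6, 6), 0.5)     # stand-in skin-likelihood map
likelihood[1:5, 1:5] = 0.9            # a "skin" blob in the middle

log_lik = np.log(likelihood)
ii = np.zeros((7, 7))                 # integral image with a zero border
ii[1:, 1:] = log_lik.cumsum(axis=0).cumsum(axis=1)

def log_joint(x0, y0, x1, y1):
    """Sum of log-likelihoods over [y0:y1, x0:x1] -- four lookups, O(1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

inside = log_joint(1, 1, 5, 5)        # rectangle on the blob
outside = log_joint(0, 0, 4, 4)       # rectangle partly off the blob
assert inside > outside               # the blob scores higher
```

Because each rectangle costs only four lookups regardless of its size, the total cost per template depends only on the number of rectangles, not on the template's pixel area.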

Continuous Edge Gradient-Based Template Matching

Videos

Panels: original image; best matching template (determined by our approach) superimposed at the hand-labeled position; combined confidence map generated by the chamfer-based approach; combined confidence map generated by our approach.

Video 1: DivX, WMV9
Video 2: DivX, WMV9
Video 3: DivX, WMV9

The videos demonstrate that our approach generates fewer and much more significant maxima (possible hand positions), which makes finding the true hand position considerably easier.



Skin Segmentation

[Figure: segmentation results on five example images; columns: Original Image, Jones and Rehg, Our Approach]

Video

The video below shows the segmentation algorithm in action. The camera is positioned at the rear right. On the screen, the left window shows the image captured by the camera; the right window shows the segmentation result of our algorithm.
Video: DivX, WMV9, MOV
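For context, histogram-based skin segmentation in the spirit of Jones and Rehg can be sketched as follows. The toy histograms, bin resolution, and names here are illustrative assumptions, not trained models, and our own method differs (see the comparison above): each pixel is classified by the likelihood ratio of skin vs. non-skin color models:

```python
# Illustrative histogram-based skin segmentation: classify each pixel by
# the likelihood ratio P(color|skin) / P(color|non-skin). The histograms
# below are toy stand-ins, not models trained on labelled data.
import numpy as np

BINS = 8  # per-channel histogram resolution (assumed)

def quantize(pixels):
    """Map 8-bit RGB pixels to flat histogram bin indices."""
    q = (pixels // (256 // BINS)).astype(int)
    return q[..., 0] * BINS * BINS + q[..., 1] * BINS + q[..., 2]

# Toy models: skin mass concentrated around one reddish color,
# non-skin uniform over all colors.
skin_hist = np.full(BINS**3, 1e-6)
skin_hist[quantize(np.array([[200, 120, 90]]))[0]] = 1.0
skin_hist /= skin_hist.sum()
nonskin_hist = np.full(BINS**3, 1.0 / BINS**3)

def segment(image, threshold=1.0):
    """Per-pixel skin mask from the likelihood ratio."""
    idx = quantize(image)
    ratio = skin_hist[idx] / nonskin_hist[idx]
    return ratio > threshold

img = np.zeros((2, 2, 3), np.uint8)
img[0, 0] = (200, 120, 90)   # skin-like pixel
mask = segment(img)
assert mask[0, 0] and not mask[1, 1]
```

In a real system the two histograms are estimated from large sets of labelled skin and non-skin pixels, and the threshold controls the trade-off between false positives and missed skin.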