Research

Current research

Computer Graphics

Meshing

Adaptive surface mesh generation

Triangular and tetrahedral mesh generation

Quadrilateral and hexahedral mesh generation

Optimization of simplex meshes

Rendering

Point cloud rendering using splats

Physically based rendering


Virtual Reality

Virtual Environments

Generation of virtual characters using biological reproduction models

Manipulation of facial expressions in virtual characters

Facial aging for virtual characters

Clothing for virtual characters

Collaborative virtual environments

Virtual shooting training

Artificial Life

Generation of behaviour for autonomous virtual characters


Computer Animation

Motion Control

Animation of virtual characters using modes of vibration

Flexible reactions of captured movements

Crowd Simulation

Human behaviour for crowd simulation

Crowd simulation in emergency situations

Crowd rendering


Other research

Mesh refinement using graphical processors

Exact form factor calculation for radiosity using CSG

Augmented reality based games

Efficient and adaptive graphical engine

SIMFRA - Simulation of fractures in ducts and pavements

AVAL - Virtual environment for language learning

Integration of virtual environments with external applications

Animation of articulated structures using the spacetime constraints technique

Meshes are computational data structures used to model physical objects, beings or locations. Such meshes can be two-dimensional, three-dimensional or surface meshes, and several kinds of animation or simulation can be carried out on them.

The mesh generation problem deals with the automatic generation of such meshes. In [1], the author classifies mesh generation techniques into the following (non-exhaustive) categories:

  1. Tri/Tetrahedral meshing
    1. Octree
    2. Delaunay
      1. Point insertion
      2. Boundary constrained triangulation
    3. Advancing front
  2. Quad/Hexahedral meshing
    1. Mapped meshing
    2. Unstructured quad meshing
      1. Indirect methods
      2. Direct methods
        1. Quad meshing by decomposition
        2. Advancing front quad meshing
    3. Unstructured hex meshing
      1. Indirect methods
      2. Direct methods
        1. Grid-based
        2. Medial surface
        3. Plastering
        4. Whisker weaving
    4. Hex-dominant methods
  3. Surface meshing
    1. Parametric space
    2. Direct 3D
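As a toy illustration of the first two categories, the sketch below performs mapped (structured) meshing of a rectangle with quadrilaterals and then splits each quad into two triangles, the simplest possible quad-to-triangle conversion. All function and variable names are illustrative, not part of any of the surveyed methods.

```python
# Mapped (structured) meshing of a rectangle: an nx-by-ny grid of
# quadrilaterals, each then split along a diagonal into two triangles.

def mapped_quad_mesh(nx, ny, width=1.0, height=1.0):
    """Return (vertices, quads) for a structured grid over a rectangle."""
    vertices = [(width * i / nx, height * j / ny)
                for j in range(ny + 1) for i in range(nx + 1)]
    quads = []
    for j in range(ny):
        for i in range(nx):
            v0 = j * (nx + 1) + i  # bottom-left corner of this cell
            quads.append((v0, v0 + 1, v0 + nx + 2, v0 + nx + 1))
    return vertices, quads

def quads_to_triangles(quads):
    """Split each quad along one diagonal."""
    tris = []
    for a, b, c, d in quads:
        tris.append((a, b, c))
        tris.append((a, c, d))
    return tris

vertices, quads = mapped_quad_mesh(4, 3)
triangles = quads_to_triangles(quads)
# 4*3 = 12 quads -> 24 triangles over 5*4 = 20 vertices
```

Mapped meshing only applies to domains that can be parameterized over a logical grid; the unstructured methods in the taxonomy exist precisely for the general domains where it cannot.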

There are also techniques to improve the quality of an automatically generated mesh:

  1. Mesh post-processing
    1. Smoothing
      1. Averaging methods
      2. Optimization-based methods
      3. Physically-based methods
      4. Mid-node placement
    2. Cleanup
      1. Shape improvement
      2. Topological improvement
    3. Refinement
      1. Triangle/Tetrahedral refinement
        1. Edge bisection
        2. Point insertion
        3. Templates
      2. Quad/Hex refinement

The CRAb group works in all these research fields of mesh generation. In particular, our current work is related to:

  1. Adaptive surface mesh generation
  2. Triangular and tetrahedral mesh generation
    1. Serial advancing front technique for models with cracks
    2. Parallel advancing front technique for models with cracks
  3. Quadrilateral and hexahedral mesh generation
  4. Optimization of simplex meshes

More information about mesh generation can be found in:

[1] "A Survey of Unstructured Mesh Generation Technology", S. J. Owen, Proceedings of the 7th International Meshing Roundtable, Sandia National Lab, pp. 239-267, October 1998, http://www.imr.sandia.gov/papers/authors/owen.html.
[2] "Basic Structured Grid Generation - With an Introduction to Unstructured Grid Generation", M. Farrashkhalvat and J. P. Miles, Butterworth-Heinemann, 2003, http://www.sciencedirect.com/science/book/9780750650588.
[3] "Triangulations and Applications - Mathematics and Visualization", Ø. Hjelle and M. Dæhlen, Springer, 2006, http://www.springer.com/mathematics/computational+science+%26+engineering/book/978-3-540-33260-2.
[4] "Mesh Generation", P. J. Frey and P. L. George, 2nd ed, ISTE Publishing and John Wiley & Sons, 2008, http://www.wiley.com/WileyCDA/WileyTitle/productCd-1848210299.html.

The Virtual Environment research in our group can be subdivided into two areas: virtual environments themselves and the virtual characters that populate them.

Virtual Environments

The research in virtual environments involves the development of collaborative virtual environments and virtual environments for shooting training.

Virtual Characters

Developing virtual characters requires a great deal of tedious work. This is why many systems offer only a few distinct body characteristics for the user to compose a customized character: typically, the user can select different types of hair, skin colour and clothes, and apply some scale transformation to change the character's size (tall or short, fat or skinny). In many desktop Networked Virtual Reality applications, some virtual characters represent the users inside the virtual environment, while others, under the control of the system, play specific roles in the environment. Often, the number of characters the user can select from is very limited. It is therefore desirable that the variability of these models approach the variability found in human populations.

Generation of virtual characters using biological reproduction models

A number of techniques for generating geometric models of the human head and body are in use nowadays. Models of human characters are useful in computer games, animations, virtual reality and many other applications. The complexity involved in generating such models, however, imposes heavy limitations on the variety, quality and definition of the characteristics of the characters produced. In our research, diploid reproduction is mimicked to produce an unlimited number of character models, which inherit traits from two parent models. The technique consists of distributing pre-selected characteristics, represented as control parameters, over a pre-determined number of chromosome pairs in both parents, followed by a simulated generation of the father's and the mother's gametes, which are randomly combined in a simulated fecundation. Diversity is ensured by four random processes: the random exchange of segments during crossover; the random alignment of homologous chromosomes at metaphases I and II of meiosis; and the random union of male and female gametes during fecundation.
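The random processes above can be sketched as follows. This is a deliberately simplified toy, assuming single-point crossover and numeric control parameters; all names are illustrative, not the group's actual genome encoding.

```python
# Simulated diploid reproduction: crossover within each homologous
# chromosome pair, random alignment choosing which recombinant enters
# the gamete, and fecundation joining one gamete from each parent.
import random

def make_gamete(chromosome_pairs, rng):
    """Simulate meiosis for one parent and return a single gamete."""
    gamete = []
    for homologue_a, homologue_b in chromosome_pairs:
        # Random exchange of segments (single-point crossover).
        point = rng.randrange(len(homologue_a) + 1)
        recombined_a = homologue_a[:point] + homologue_b[point:]
        recombined_b = homologue_b[:point] + homologue_a[point:]
        # Random alignment: either recombinant may end up in the gamete.
        gamete.append(rng.choice([recombined_a, recombined_b]))
    return gamete

def fecundation(father, mother, rng):
    """Randomly combine one gamete from each parent into a child genome."""
    return list(zip(make_gamete(father, rng), make_gamete(mother, rng)))

rng = random.Random(42)
# Two chromosome pairs per parent; each chromosome is a list of control
# parameters (e.g. nose width, eye spacing) encoded here as numbers.
father = [([1, 2, 3], [4, 5, 6]), ([7, 8], [9, 10])]
mother = [([11, 12, 13], [14, 15, 16]), ([17, 18], [19, 20])]
child = fecundation(father, mother, rng)
```

Because every run draws fresh crossover points and alignments, repeated fecundations of the same two parents yield an unlimited supply of distinct but related character genomes.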

Manipulation of facial expressions in virtual characters

Three-dimensional virtual creatures are active actors in many types of applications nowadays, such as virtual reality, games and computer animation. The virtual actors encountered in those applications are very diverse, but usually have human-like behaviour and facial expressions. This research deals with the mapping of facial expressions between virtual characters, based on anthropometric proportions and geometric manipulations by influence zones. The facial proportions of a base model are used to transfer expressions to any other model with similar global characteristics (if the base model is human, for instance, the other models need to have two eyes, one nose and one mouth). With this solution, it is possible to insert new virtual characters into real-time applications without going through the tedious process of customizing each character's emotions.
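A minimal sketch of the influence-zone idea: an expression is stored as displacements of a few control points on the base face, rescaled by the ratio between the target's and the base's anthropometric measurements, and applied to vertices inside a zone with a distance falloff. The names and the linear falloff are assumptions for illustration, not the actual method.

```python
# Transfer one control-point displacement to a target face: vertices
# inside the influence zone move along the base displacement, weighted
# by a linear falloff and by an anthropometric scale factor.
import math

def transfer_expression(vertices, centre, displacement, radius, scale):
    """vertices: list of (x, y, z); centre: zone centre; displacement:
    the base model's displacement vector; radius: zone radius; scale:
    ratio of target to base measurement (e.g. mouth width)."""
    moved = []
    for v in vertices:
        d = math.dist(v, centre)
        if d >= radius:
            moved.append(v)          # outside the influence zone
            continue
        w = (1.0 - d / radius) * scale  # linear falloff, rescaled
        moved.append(tuple(v[i] + w * displacement[i] for i in range(3)))
    return moved

# A smile displacement at a mouth corner, retargeted to a wider face
# (scale = 1.5); the far vertex lies outside the zone and stays put.
face = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]
smile = transfer_expression(face, centre=(0.0, 0.0, 0.0),
                            displacement=(0.0, 0.2, 0.0),
                            radius=1.0, scale=1.5)
```

A full expression would combine many such zones, one per facial control point, with the scale factor derived from the two models' facial proportions.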

Facial aging for virtual characters

Computer Graphics has applications in fields of knowledge as diverse as Engineering, Science, the Arts, Entertainment and Education. One such application is the generation of plausible, age-updated models of missing persons. The proposed methodology uses the simulated diploid reproduction of virtual characters carefully modeled taking into account the traits of the missing person's parents. The genetic characteristics of both parents are stored in their genomic data structures, which are used to construct pools of male and female gametes for a simulated fecundation. The descendants are generated with the same age the missing person had at the time of disappearance.

Through an interactive process, a plausible model of the missing person is selected among the generated descendants and its genomic data structure is saved. The parents’ models and corresponding data structures are updated to reflect the age of the missing person at search time. Next, the genomic data structure of the missing person is updated with the information contained in the updated data structure of the parents, and an updated model of the missing person is generated. This updated model is a plausible model, upon which perturbations can be applied to generate several plausible variants.
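The genomic update step above can be illustrated with a toy numeric sketch: the parents' parameters are known both at the disappearance age and at search time, and the mean of their changes is applied to the selected descendant's saved genome. The parameter names and the simple additive update are assumptions for illustration only.

```python
# Age a child genome by the average change observed in the parents'
# genomes between the disappearance age and search time.

def age_genome(child_young, parents_young, parents_aged):
    """Shift each of the child's parameters by the mean parental change."""
    aged = {}
    for key, value in child_young.items():
        delta = sum(p_aged[key] - p_young[key]
                    for p_young, p_aged in zip(parents_young, parents_aged))
        aged[key] = value + delta / len(parents_young)
    return aged

father_young = {"jaw_width": 1.0, "skin_tone": 0.4}
father_aged  = {"jaw_width": 1.1, "skin_tone": 0.5}
mother_young = {"jaw_width": 0.8, "skin_tone": 0.6}
mother_aged  = {"jaw_width": 0.9, "skin_tone": 0.8}
child_young  = {"jaw_width": 0.9, "skin_tone": 0.5}
child_aged = age_genome(child_young,
                        [father_young, mother_young],
                        [father_aged, mother_aged])
# jaw_width: 0.9 + (0.1 + 0.1)/2 = 1.0; skin_tone: 0.5 + (0.1 + 0.2)/2 = 0.65
```

Perturbing the aged parameters then yields the several plausible variants mentioned above.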

Clothing for virtual characters

TODO Rendering page.

TODO Artificial Life page.