We are researching novel technologies based on artificial intelligence (AI), such as machine learning and neural networks, for smartly generating, editing, and analyzing visual content such as animations and illustrations. With these technologies, richer virtual worlds can emerge.

Member

Faculty

Doctor

  • Haruka Takahashi

Graduate

  • Kashiko Aoyama
  • Shintaro Koketsu
  • Tetsuya Shuyama
  • Tsubasa Wakaiki
  • Ka Yokuhan
  • Eya Khedher (International course)
  • Chihiro Tone (IMLEX course)
  • Norimichi Soga
  • Ryusei Nishikawa

Undergraduate

  • Issei Adachi
  • Isamu Sato
  • Taisei Mimura
  • Pham Thuy Tien

Secretary

  • Hiroko Yokoyama

Alumni (OB/OG)

Former Faculty

  • Yuki Endo (University of Tsukuba)
  • Takao Jinno (Osaka Institute of Technology)
  • Tomohiko Mukai (Tokyo Metropolitan University)
  • Toyohisa Kaneko (Honorary Professor)
  • Kimiya Aoki (Chukyo University)
  • Koichi Hirota (The University of Electro-Communications)

Alumni in Academic Positions

  • Takeshi Saitoh (Kyushu Institute of Technology)
  • Tomohiko Mukai (Tokyo Metropolitan University)
  • Yuya Iwakiri (National Institute of Technology, Hiroshima College)
  • Yohei Iwasaki (National Institute of Technology, Kochi College)

Visiting Professors / Research Students

  • Yohei Iwasaki (NIT, Kochi College)
  • Keiichi Sato (NIT, Hakodate College)
  • Janaka Rajapakse (Tainan National University of the Arts)
  • Yi Li (LAAS-CNRS)
  • Xiuzhuo Wang
  • Heekyung Kim
  • Jae-sung Hong

Research Projects

Digital human modeling and control using machine learning

Virtual humans are built with modern computer graphics and vision technologies to construct visual agents or digital doubles, and recent research has shifted rapidly toward integration with machine learning methodologies such as deep neural networks.

Style retargeting of gestural motions

This project develops technology for enhancing the expressiveness of gestures performed by virtual characters (avatars) via real-time synthesis and retargeting of stylized motions.
This project is supported by Grants-in-Aid for Scientific Research (B).

Feature space learning for interactive controls of full-body animations

To relate the operation of an input device to complicated full-body movement, the features and structure of the operation must be matched with those of the movement while adapting to the user's cognitive model. This project develops a learning mechanism that automatically acquires, through interactive teaching, a feature space in which complex movements can be manipulated intuitively.
This project is supported by Grants-in-Aid for Exploratory Research.
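As a rough illustration of the idea (not the project's actual method), the sketch below learns a low-dimensional feature space of full-body poses with PCA and maps a 2D controller coordinate into that space; all function names, data shapes, and parameters are hypothetical.

    import numpy as np

    def learn_pose_feature_space(poses, n_dims=2):
        """Learn a low-dimensional feature space of full-body poses with PCA.
        poses: (num_frames, num_joint_dofs) array of example poses."""
        mean = poses.mean(axis=0)
        centered = poses - mean
        # Principal directions of the examples span the feature space.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_dims]                      # (n_dims, num_joint_dofs)
        return mean, basis

    def control_to_pose(control_xy, mean, basis, scale=1.0):
        """Map a 2D input-device coordinate to a full-body pose."""
        return mean + scale * np.asarray(control_xy) @ basis

    # Hypothetical usage: 1000 example frames with 60 joint DOFs.
    poses = np.random.randn(1000, 60)
    mean, basis = learn_pose_feature_space(poses, n_dims=2)
    pose = control_to_pose([0.3, -0.5], mean, basis)

In a full system the mapping would be refined from user demonstrations (interactive teaching) rather than fixed once by PCA.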

Model-free digital human contents

Digital humans are built with modern computer graphics and vision technologies to construct visual agents or digital doubles, and recent research has shifted rapidly toward integration with machine learning methodologies such as deep neural networks.
The 3D geometric model of a digital human requires knowledge of mesh-based geometric representation, and the animation model additionally requires managing rotational and skeletal representations and skin deformations. Such model-based technologies raise a barrier for ordinary users who want to create content from scratch.
In contrast, state-of-the-art deep neural networks can convert visible human information from captured images into realistic syntheses in an end-to-end manner. Such model-free content brings a paradigm shift to digital human creation by improving accessibility.
Computer graphics and vision technologies for human models are being re-invented by introducing machine learning to integrate 2D and 3D models seamlessly. Such flexible integration can provide new visual experiences in virtual environments.
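To make the model-based barrier concrete, the following minimal sketch shows standard linear blend skinning, the kind of skeletal skin-deformation machinery a user must understand in a model-based pipeline; the data shapes in the usage example are illustrative only.

    import numpy as np

    def linear_blend_skinning(rest_vertices, bone_transforms, weights):
        """Standard linear blend skinning.
        rest_vertices:   (V, 3) vertex positions in the rest pose
        bone_transforms: (B, 4, 4) transforms mapping rest pose to current pose
        weights:         (V, B) skinning weights, each row summing to 1"""
        V = rest_vertices.shape[0]
        homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
        # Transform every vertex by every bone, then blend by the weights.
        per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
        blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
        return blended[:, :3]

    # Hypothetical usage: 4 vertices skinned to 2 bones.
    rest = np.random.rand(4, 3)
    bones = np.stack([np.eye(4), np.eye(4)])
    w = np.full((4, 2), 0.5)
    deformed = linear_blend_skinning(rest, bones, w)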


Past Projects and Publications

Informatics for Illustrative Images

Style-Based Retrieval of Illustrations

Retrieval, classification, and ranking methodologies based on the drawing styles of illustrative images. Intuitive and automatic style conversion and transformation are also investigated for both raster and vector images.

Keynote speech: Style-Based Content Exploration for Aesthetic Media Informatics (VRCAI 2015)
An Unsupervised Approach for Comparing Styles of Illustrations, CBMI2015 [Best Paper Award]

From illustrative images to optical signals

A smart image conversion technique is developed using color mapping based on a color enhancement method and perceptual experiments.

Perceptual Color Enhancement for LED Illuminations

Font image conversion for machine embroidery

A smart image conversion for Kanji-character embroidery is developed using state-of-the-art deep neural network technologies, aiming to imitate the aesthetic expression of skilled craftsmen.

Perceptual metric for down-scaled illustrations

A new perceptual metric is developed for estimating the image quality of down-scaled illustrative images, which is useful when obtaining icons or pixel art by down-sampling illustrations.


Smart Lighting Controls & Color Estimation

Estimation of lighting colors

An image analysis technique was developed for estimating the colors of multiple light sources illuminating a scene from a single image by introducing color-line theory.

Estimation of Multiple Illuminant Colors Using Color Lines of Single Image
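A rough, hypothetical sketch of the color-line idea: pixels belonging to one surface under one light tend to lie along a line in RGB space, and the line's direction carries information about the illuminant color. The clustering and line fitting below are generic placeholders, not the published algorithm.

    import numpy as np

    def estimate_illuminant_colors(pixels, n_regions=4, seed=0):
        """Fit a color line to each pixel cluster and return normalized
        line directions as rough illuminant-color estimates.
        pixels: (N, 3) RGB values in [0, 1]."""
        rng = np.random.default_rng(seed)
        # Crude k-means on chromaticity to group pixels into regions.
        chroma = pixels / (pixels.sum(axis=1, keepdims=True) + 1e-6)
        centers = chroma[rng.choice(len(chroma), n_regions, replace=False)]
        for _ in range(10):
            labels = np.argmin(((chroma[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([chroma[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(n_regions)])
        estimates = []
        for k in range(n_regions):
            cluster = pixels[labels == k]
            if len(cluster) < 10:
                continue
            # Principal direction of the cluster approximates its color line.
            _, _, vt = np.linalg.svd(cluster - cluster.mean(axis=0), full_matrices=False)
            direction = np.abs(vt[0])
            estimates.append(direction / direction.sum())
        return np.array(estimates)

    illuminants = estimate_illuminant_colors(np.random.rand(5000, 3))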

Lighting controls for impressive portrait

Image-based monitoring and control with a mobile lighting robot was developed by analyzing the lighting conditions in a portrait image.

Shade Analysis on Facial Images for Robotic Lighting

Restoration of lighting colors

An image synthesis technique was developed for authentically restoring the color of indoor scenes illuminated with colored lighting systems. HDR images are smartly converted on the basis of a color appearance model of the human visual system.

Restoration of color appearance by combining local adaptations for HDR images, AIC2015

Image-based multiple lighting controls with a Web interface

Smart lighting methodologies were developed with sensor-attached, digitally controllable color LED units. Smart, energy-saving dimming control is implemented by introducing intrinsic image analysis and a Web-based control interface.


Visual Code & Optical Image Communications

Ubiquitous communications via optical spatio-temporal data processing

Pattern recognition techniques for optical variations over space and time are explored with the image sensors of commercial smartphones and handy mobile devices such as tablets and smartwatches. Data communication systems with color LEDs are developed as an application of this optical pattern recognition.

Data-Embeddable Texture Synthesis

SmartGraphics 2007

A method of synthesizing texture images was developed for embedding arbitrary data. It introduces techniques for generating repetitive texture patterns through feature learning of a sample image. A synthesized image can effectively conceal the embedded pattern, and the pattern can be robustly detected from images photographed with mobile devices such as smartphones and tablets.
Texture Synthesis for Mobile Data Communications, IEEE Computer Graphics and Applications


Humanoid Animations

Smart Skin Deformations

We have developed an efficient method of deforming human skin driven by skeletal motion by introducing a state-of-the-art sparsity model called nuclear norm minimization.
The project also utilizes this model for interactive classification and synthesis of motion data.
Efficient Dynamic Skinning with Low-Rank Helper Bone Controllers, ACM Transactions on Graphics (SIGGRAPH 2016)
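The nuclear norm (the sum of a matrix's singular values) is the standard convex surrogate for matrix rank, and its proximal operator is singular value thresholding, sketched below. This is the generic building block of low-rank models, not the paper's specific helper-bone formulation; the usage data are made up.

    import numpy as np

    def singular_value_thresholding(M, tau):
        """Proximal operator of tau * ||X||_* (nuclear norm):
        shrink every singular value of M by tau and rebuild the matrix."""
        u, s, vt = np.linalg.svd(M, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)
        return (u * s_shrunk) @ vt

    # Hypothetical usage: denoise a nearly rank-2 matrix of controller weights.
    rng = np.random.default_rng(0)
    low_rank = rng.random((30, 2)) @ rng.random((2, 8))
    noisy = low_rank + 0.01 * rng.standard_normal((30, 8))
    denoised = singular_value_thresholding(noisy, tau=0.05)
    print(np.linalg.matrix_rank(denoised, tol=1e-3))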

Pose-Timeline for Propagating Motion Edits

Symposium on Computer Animation 2009

A motion editing interface was developed for efficiently and flexibly editing sequences of iterative actions with a few intuitive operations. It visualizes a motion sequence on a summary timeline with editable pose icons, and drag-and-drop operations on the timeline enable intuitive control of temporal properties of the motion such as timing, duration, and coordination.
Pose-Timeline for Propagating Motion Edits, Symposium on Computer Animation (SCA 2009)
keyword: pose-timeline, edit propagation, motion re-timing, motion style transfer

Multilinear Motion Synthesis with Level of Detail Controls

Pacific Graphics 2007

A hybrid algorithm was developed to optimize the reduction size and computational time according to the distance from the camera while maintaining visual quality. It provides a practical tool for creating interactive animations of many characters while ensuring accurate and flexible control at a modest computational cost.
Multilinear motion synthesis with level-of-detail controls, Pacific Graphics 2007
keyword: motion interpolation, multilinear analysis, level-of-detail control
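A minimal sketch of the multilinear idea behind level-of-detail control: a motion tensor (for example actors x styles x pose features) is factored with a higher-order SVD, and truncating the mode ranks trades accuracy for speed. This is a generic HOSVD under assumed tensor dimensions, not the paper's exact algorithm.

    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding of a tensor into a matrix."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def hosvd_truncated(tensor, ranks):
        """Truncated higher-order SVD: one factor matrix per mode plus a core.
        Smaller ranks give a cheaper, lower-detail approximation."""
        factors = []
        for mode, r in enumerate(ranks):
            u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
            factors.append(u[:, :r])
        core = tensor
        for mode, u in enumerate(factors):
            core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    def reconstruct(core, factors):
        out = core
        for mode, u in enumerate(factors):
            out = np.moveaxis(np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
        return out

    # Hypothetical motion tensor: 5 actors x 4 styles x 90 pose features.
    motion = np.random.rand(5, 4, 90)
    core, factors = hosvd_truncated(motion, ranks=(3, 2, 20))
    approx = reconstruct(core, factors)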

Geostatistical Motion Interpolation

SIGGRAPH 2005

Motion interpolation was developed using statistical prediction of missing data in an arbitrarily definable parametric space. A practical technique from geostatistics, called universal kriging, was introduced to statistically estimate the correlation between the dissimilarity of motions and the distance in the parametric space. It statistically optimizes interpolation kernels for the given parameters at each frame, using a pose distance metric to analyze the correlation efficiently.
Geostatistical Motion Interpolation, ACM Transactions on Graphics (SIGGRAPH 2005)
SIGGRAPH demo movie
keyword: motion interpolation, geostatistics, kriging, variogram
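For readers unfamiliar with kriging, the sketch below shows textbook ordinary kriging in a one-dimensional parametric space with a Gaussian variogram. The paper itself uses universal kriging with a pose distance metric per frame; the parameter values and data here are purely illustrative.

    import numpy as np

    def gaussian_variogram(h, sill=1.0, length=1.0):
        """Variogram model: dissimilarity as a function of parameter distance h."""
        return sill * (1.0 - np.exp(-(h / length) ** 2))

    def ordinary_kriging(params, values, query, sill=1.0, length=1.0):
        """Interpolate scalar 'values' sampled at 'params' to 'query'
        by solving the ordinary kriging system."""
        n = len(params)
        d = np.abs(params[:, None] - params[None, :])
        # Kriging system with a Lagrange multiplier enforcing unbiasedness.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gaussian_variogram(d, sill, length)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = gaussian_variogram(np.abs(params - query), sill, length)
        weights = np.linalg.solve(A, b)[:n]
        return weights @ values

    # Hypothetical usage: blend example motions parameterized by walking speed.
    speeds = np.array([0.8, 1.2, 1.6, 2.0])   # parameters of example motions
    stride = np.array([0.5, 0.7, 0.9, 1.1])   # some pose feature per example
    print(ordinary_kriging(speeds, stride, query=1.4))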

Autonomous, Explorative Motion Generation

IEICE 2004

Explorative synthesis of human motions was developed using hierarchical reinforcement learning to search for plausible poses under end-effector constraints.

Extensive and Efficient Search of Human Movements with Hierarchical Reinforcement Learning, Computer Animation 2002 (CA 2002)

keyword: keyframe animation, hierarchical reinforcement learning

Neural Gait Generator

IEICE 2004

Coupled neural cell models were introduced for generating oscillation signals to synthesize various gait motions that are self-stabilizing in dynamic environments.

Physiological Gaits Controls with a Neural Pattern Generator, The Journal of Visualization and Computer Animation

keyword: neural oscillator, central pattern generator (CPG)
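A standard coupled-neuron model used as a central pattern generator is the Matsuoka oscillator; the minimal two-neuron version below produces a rhythmic signal that could drive a gait parameter. This is a generic CPG, not the published controller, and the parameter values are illustrative and may need tuning.

    import numpy as np

    def matsuoka_oscillator(steps=2000, dt=0.005, tau=0.1, tau_p=0.2,
                            beta=2.5, w=2.5, drive=1.0):
        """Two mutually inhibiting neurons with self-adaptation produce
        a sustained oscillation (a simple central pattern generator)."""
        x = np.array([0.1, 0.0])   # membrane states
        v = np.zeros(2)            # adaptation (fatigue) states
        outputs = []
        for _ in range(steps):
            y = np.maximum(x, 0.0)               # firing rates
            x_dot = (-x - beta * v - w * y[::-1] + drive) / tau
            v_dot = (-v + y) / tau_p
            x += dt * x_dot
            v += dt * v_dot
            outputs.append(y[0] - y[1])          # oscillating output signal
        return np.array(outputs)

    signal = matsuoka_oscillator()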


Retrieval of Motion Capture Data

Smart Manipulations of Massive Motion Data

SCA 2004

A self-organizing map was introduced to visualize massive motion capture datasets while interactively retrieving motion clips with the map.

Motion Map: Image-based Retrieval and Segmentation of Motion Data, Symposium on Computer Animation (SCA2004)

keyword: motion clip retrieval, self organizing map (SOM)
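A minimal self-organizing map sketch: pose (or clip-feature) vectors are mapped onto a 2-D grid whose cells can then serve as an image-like index for browsing and retrieval. The function names, grid size, and feature dimensions are assumptions for illustration.

    import numpy as np

    def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a self-organizing map on feature vectors (e.g. pose features).
        Returns the grid of codebook vectors, shape (rows, cols, dim)."""
        rng = np.random.default_rng(seed)
        rows, cols = grid
        weights = rng.random((rows, cols, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'), -1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            # Best-matching unit: grid cell whose codebook vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Decaying learning rate and neighborhood radius.
            frac = t / iters
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
        return weights

    # Hypothetical usage: 500 motion frames described by 30-D pose features.
    som = train_som(np.random.rand(500, 30))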

Generating Concise Rules for Retrieving Human Motions from Large Datasets

Computer Animation and Social Agents 2009

A method for retrieving human motion data was developed with concise retrieval rules based on the spatio-temporal features of motion appearance. It converts each motion clip into a clause-based representation of geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming.
Generating Concise Rules for Retrieving Human Motions from Large Datasets, Computer Animation and Social Agents (CASA2009)
keyword: motion retrieval, inductive logic programming, appearance feature
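To illustrate the clause-based conversion, the sketch below turns per-frame joint positions into Prolog-style facts that an inductive-logic-programming system could generalize into retrieval rules. The predicate names, thresholds, and data format are hypothetical, not those of the paper.

    def motion_to_clauses(frames, clip_id="clip1"):
        """Convert frames of joint positions into clause-like facts describing
        geometric relations between body parts over time.
        frames: list of dicts mapping joint name -> (x, y, z), with y up."""
        clauses = []
        for t, joints in enumerate(frames):
            if joints["right_hand"][1] > joints["head"][1]:
                clauses.append(f"above({clip_id}, right_hand, head, {t}).")
            if abs(joints["left_foot"][1] - joints["right_foot"][1]) > 0.3:
                clauses.append(f"feet_apart_vertically({clip_id}, {t}).")
        return clauses

    # Hypothetical usage with two frames of a jumping motion.
    frames = [
        {"right_hand": (0.3, 1.9, 0.0), "head": (0.0, 1.7, 0.0),
         "left_foot": (0.1, 0.0, 0.0), "right_foot": (-0.1, 0.0, 0.0)},
        {"right_hand": (0.3, 1.5, 0.0), "head": (0.0, 1.7, 0.0),
         "left_foot": (0.1, 0.6, 0.0), "right_foot": (-0.1, 0.1, 0.0)},
    ]
    print(motion_to_clauses(frames))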


Behavior Simulations

Psychology-based Crowd Simulation

SCA 2004

A gait simulation was developed based on a psychological model of personal space and group intelligence for very crowded environments.

Psychological Model for Animating Crowded Pedestrians, Computer Animation and Virtual Worlds (Special Issue of CASA 2005)

keyword: crowded pedestrian, psychological model, personal space, virtual memory, locomotion graph
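As a toy illustration of the personal-space idea (not the published model), each pedestrian below steers toward a goal while being pushed away from neighbors who enter its personal-space radius; all parameters are made up.

    import numpy as np

    def crowd_step(positions, goals, dt=0.1, speed=1.2, personal_radius=0.8, push=1.5):
        """Advance all pedestrians one step: attraction toward the goal plus
        repulsion from any neighbor inside the personal-space radius."""
        n = len(positions)
        new_positions = positions.copy()
        for i in range(n):
            to_goal = goals[i] - positions[i]
            desired = speed * to_goal / (np.linalg.norm(to_goal) + 1e-6)
            repulsion = np.zeros(2)
            for j in range(n):
                if i == j:
                    continue
                offset = positions[i] - positions[j]
                dist = np.linalg.norm(offset)
                if dist < personal_radius:
                    # Push away, stronger the deeper the intrusion.
                    repulsion += push * (personal_radius - dist) * offset / (dist + 1e-6)
            new_positions[i] = positions[i] + dt * (desired + repulsion)
        return new_positions

    # Hypothetical usage: five pedestrians walking toward the same doorway.
    positions = np.random.rand(5, 2) * 4.0
    goals = np.tile(np.array([10.0, 2.0]), (5, 1))
    for _ in range(100):
        positions = crowd_step(positions, goals)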

Web-based Behavior Simulation

SIGGRAPH 2003 Web Graphics

A Web-based system for simulating human behaviors was developed with motion capture data and XML-based behavior scenarios. Java-based middleware was developed with financial support from the IPA Mitou software project.

Extensible Task Simulation with Motion Archive, Transactions on Information and Systems of IEICE
Extensible Behavior Simulation with Motion Archive, SIGGRAPH2003 Web Graphics

keyword: web3D, xml, simulation middleware
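The XML scenario format itself is not described here, so the following is a purely hypothetical example of how such a behavior scenario might look and be dispatched to motion clips; the element names and the Python dispatch code are assumptions, not the system's actual schema or middleware.

    import xml.etree.ElementTree as ET

    # Hypothetical scenario format: actors perform named motion clips in order.
    SCENARIO = """
    <scenario>
      <actor id="worker1">
        <action clip="walk_to_desk" duration="4.0"/>
        <action clip="sit_down" duration="1.5"/>
        <action clip="type_keyboard" duration="10.0"/>
      </actor>
    </scenario>
    """

    def load_scenario(xml_text):
        """Parse the scenario into a per-actor list of (clip, duration) tasks."""
        root = ET.fromstring(xml_text)
        plan = {}
        for actor in root.findall("actor"):
            plan[actor.get("id")] = [
                (a.get("clip"), float(a.get("duration"))) for a in actor.findall("action")
            ]
        return plan

    for actor_id, actions in load_scenario(SCENARIO).items():
        for clip, duration in actions:
            print(f"{actor_id}: play '{clip}' for {duration} s")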