dc.contributor.author | ENACHI, Andrei | |
dc.contributor.author | TURCU, Cornel | |
dc.contributor.author | CULEA, George | |
dc.contributor.author | BANU, Ioan-Viorel | |
dc.contributor.author | ANDRIOAIA, Dragos-Alexandru | |
dc.contributor.author | PETRU, Puiu-Gabriel | |
dc.contributor.author | POPA, Sorin-Eugen | |
dc.date.accessioned | 2022-12-28T11:35:38Z | |
dc.date.available | 2022-12-28T11:35:38Z | |
dc.date.issued | 2022 | |
dc.identifier.citation | ENACHI, Andrei, TURCU, Cornel, CULEA, George et al. Human Motion Recognition Using Artificial Intelligence Techniques. In: Electronics, Communications and Computing (IC ECCO-2022): 12th intern. conf., 20-21 Oct. 2022, Chişinău, Republica Moldova: conf. proc., Chişinău, 2022, pp. 200-202. | en_US |
dc.identifier.uri | https://doi.org/10.52326/ic-ecco.2022/CS.11 | |
dc.identifier.uri | http://repository.utm.md/handle/5014/21857 | |
dc.description.abstract | The goal of this paper's research is to develop learning methods that support the automatic analysis and interpretation of human and mime-gestural movement from various perspectives and using various data sources (for example, images, video, depth, mocap data, audio, and inertial sensors). Deep neural models are used, together with supervised classification and semi-supervised feature learning for modeling temporal dependencies, and their effectiveness is demonstrated on a set of fundamental tasks such as detection, classification, parameter estimation, and user verification. A method is presented for identifying and classifying human actions and gestures based on multi-dimensional and multi-modal deep learning from visual signals (for example, live stream, depth, and motion-based data). A training strategy (called ModDrop) is employed in which individual modalities are first carefully initialized and then gradually fused, so that correlations between modalities are learned while the uniqueness of each modality-specific representation is preserved. In addition, the proposed ModDrop training approach ensures that the classifier remains robust to weak or missing inputs on one or more channels, enabling it to make valid predictions from any number of available modalities. In this paper, data collected by inertial sensors (such as accelerometers and gyroscopes) embedded in mobile devices are also used. | en_US
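Note: The abstract describes a ModDrop-style strategy of initializing per-modality encoders individually, then fusing them while occasionally dropping whole modalities during training so the classifier stays robust when a channel is weak or missing. The Python sketch below only illustrates that general idea under stated assumptions; the class name ModDropFusion, the modality dimensions, the drop probability, and all other hyperparameters are hypothetical and are not taken from the paper.

import torch
import torch.nn as nn

class ModDropFusion(nn.Module):
    """Minimal sketch of modality-dropout fusion over several input channels."""
    def __init__(self, modality_dims, hidden=64, num_classes=10, p_drop=0.2):
        super().__init__()
        # One small encoder per modality; in the paper's strategy these would
        # first be initialized/trained individually before fusion.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modality_dims]
        )
        self.p_drop = p_drop
        self.classifier = nn.Linear(hidden * len(modality_dims), num_classes)

    def forward(self, inputs):
        feats = []
        for enc, x in zip(self.encoders, inputs):
            f = enc(x)
            # During training, drop an entire modality with probability p_drop,
            # forcing the classifier to cope with missing channels.
            if self.training and torch.rand(1).item() < self.p_drop:
                f = torch.zeros_like(f)
            feats.append(f)
        return self.classifier(torch.cat(feats, dim=-1))

# Example usage with made-up feature sizes for video (128-d), depth (64-d),
# and inertial (32-d) inputs on a batch of 8 samples.
model = ModDropFusion([128, 64, 32])
logits = model([torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 32)])
print(logits.shape)  # torch.Size([8, 10])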
dc.language.iso | en | en_US |
dc.publisher | Technical University of Moldova | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | deep learning | en_US |
dc.subject | neural models | en_US |
dc.subject | sensors | en_US |
dc.subject | human movement | en_US |
dc.subject | mime-gestural movement | en_US |
dc.subject | automatic analysis | en_US |
dc.title | Human Motion Recognition Using Artificial Intelligence Techniques | en_US |
dc.type | Article | en_US |