Research Groups

Research Lines

  • Dimensional control of parts at elevated temperatures
  • Development of haptic devices for industrial assembly operations
  • Development of drive-by-wire systems with haptic feedback
  • Generation of virtual twins of industrial plant elements
  • Augmented reality for guiding operators in manoeuvres or maintenance processes
  • Collaborative robotics for surgical interventions
  • Industrial process simulators for the evaluation of alternatives
  • High-precision computer vision for gear measurement

Keywords

  • automatic control
  • haptics
  • mechatronics
  • augmented reality
  • virtual reality
  • robotics
  • computer vision

Scientific Publications since 2018

  • Authors: López-de Ipina, K. (Corresponding author); Iradi, J.; Fernández, E.; et al.
    Journal: SENSORS
    ISSN: 1424-8220 Vol. 23 No. 3 2023 pp. 1170
    The workplace is evolving towards scenarios where humans are acquiring a more active and dynamic role alongside increasingly intelligent machines. Moreover, the active population is ageing and, consequently, emerging risks could appear due to the health disorders of workers, which requires intelligent intervention both for production management and workers' support. In this sense, innovative and smart systems oriented towards monitoring and regulating workers' well-being will become essential. This work presents HUMANISE, a novel proposal of an intelligent system for risk management, oriented to workers suffering from disease conditions. The developed support system is based on Computer Vision, Machine Learning and Intelligent Agents. Results: The system was applied to a two-arm Cobot scenario during a Learning from Demonstration task for collaborative parts transportation, where risk management is critical. In this environment, with a worker suffering from a mental disorder, safety is successfully controlled by means of human/robot coordination, and risk levels are managed through the integration of human/robot behaviour models and worker models based on the workplace model of the World Health Organization. The results show a promising real-time support tool to coordinate and monitor these scenarios by integrating workers' health information towards a successful risk management strategy for safe industrial Cobot environments.
  • Authors: Borro Yagüez, Diego (Corresponding author); Zachmann, G.; Giannini, F.; et al.
    ISSN: 2673-4192 Vol. 3 2022 pp. 968054
  • Authors: Chunab-Rodriguez, M. A.; Santana-Diaz, A.; Rodriguez-Arce, J. (Corresponding author); et al.
    ISSN: 2076-3417 Vol. 12 No. 14 2022 pp. 7277
    In recent years, engineering degree programs have become central to the teaching of robotics and incorporate many fundamental STEM concepts. Some authors have proposed different platforms for teaching different topics related to robotics, but most of these platforms are not practical for classroom use. In the case of teaching autonomous navigation algorithms, the absence of platforms in classrooms limits learning because students are unable to perform practice activities or cannot evaluate and compare different navigation algorithms. The main contribution of this study is the implementation of a free platform for teaching autonomous-driving algorithms based on the Robot Operating System without the use of a physical robot. The authors present a case study using this platform as a teaching tool for instruction in two undergraduate robotics courses. Students evaluated the platform quantitatively and qualitatively. Our study demonstrates that professors and students can carry out different tests and compare different navigation algorithms to analyze their performance under the same conditions in class. In addition, the proposed platform provides realistic representations of environments and data visualizations. The results indicate that the use of simulations helps students better understand the theoretical concepts, motivates them to pay attention, and increases their confidence.
  • Authors: Díaz Garmendia, Iñaki (Corresponding author); Elizalde, E.; López, S. R.; et al.
    ISSN: 0885-8985 Vol. 37 No. 2 2022 pp. 24 - 32
  • Authors: Franceschi, P. (Corresponding author); Mutti, S.; Ottogalli Fernández, Kiara Alexandra; et al.
    ISSN: 0951-192X Vol. 35 No. 6 2022 pp. 619 - 632
    With the ongoing Industry 4.0 (I4.0) revolution, plant management and supervision play a key role in the development and (re-)design of industrial plants. In the arising scenarios, the need to coordinate human workers and autonomous systems that share the same environment and team together becomes a fundamental requirement. Indeed, even though automation in standard assembly lines has reached high efficiency and reliability, for complex and new applications a certain amount of failure must be anticipated and addressed. This paper presents a framework for the flexible coordination of such a complex and heterogeneous cyber-physical system. A digital twin mirrors the plant system in real time, while a dashboard displays plant status, providing the human operators with fundamental tools for supervision and prompt intervention in case of failure. The framework was developed and tested in an industrially relevant environment, specifically for the assembly of the interior of an aircraft fuselage.
  • Authors: Iparraguirre Gil, Olatz (Corresponding author); Iturbe Olleta, Nagore; Brazalez Guerra, Alfonso; et al.
    ISSN: 1524-9050 Vol. 23 No. 11 2022 pp. 22378 - 22385
    The future of autonomous driving is slowly approaching, but there are still many steps to take before it can become a reality. It is crucial to pay attention to road infrastructure, because without it, intelligent vehicles will not be able to operate reliably, and it will never be possible to dispense with driver control. This paper presents the work carried out for the detection of road-marking damage using computer vision techniques. This is a complex task for which few papers and large image sets are currently available in the literature. This study uses images from the public Road Damage Detection dataset for the D44 defect and also provides 971 new labelled images for Spanish roads. For this purpose, three detectors based on deep learning architectures (Faster RCNN, SSD and EfficientDet) have been used, and single-source and mixed-source models have been studied to find the model that best fits the target images. Finally, F1-score values reaching 0.929 and 0.934 have been obtained for Japanese and Spanish images respectively, which improve the state-of-the-art results by 25%. It can be concluded that the results of this study are promising, although the collection of many more images will be necessary for the scientific community to continue advancing in this field of research.
  • Authors: Moru, Desmond Kehinde (Corresponding author); Borro Yagüez, Diego
    ISSN: 0268-3768 Vol. 114 No. 3 2021 pp. 797 - 809
    Due to changes and movements during a measurement process, the need for an alignment system becomes imperative, in order to avoid all possible errors that may arise from a lack of alignment. In the effort to obtain the best possible conditions for alignment, it is necessary to check whether the object to be measured is well positioned. Good alignment reduces downtime and should be part of the quality control process. The aim of this paper is the study of light and object alignments to monitor and achieve an optimal alignment system, in order to eliminate the effects of misalignment. The algorithms were tested with a non-optimal system to ascertain their efficiency. In addition, calibration parameters studied in a previous work were added to the experiment in order to quantify the impact of each individual optimization on the measurement error of each stage.
  • Authors: Ottogalli Fernández, Kiara Alexandra (Corresponding author); Rosquete De Mora, Daniel Humberto; Rojo, J.; et al.
    ISSN: 0951-192X Vol. 34 No. 9 2021 pp. 975 - 995
    Aeronautics, in the context of Industry 4.0, is continuously evolving to respond to market dynamics and has incorporated automation into many stages of aircraft manufacturing. However, most of the final assembly line processes are still done manually and remain a challenge. Virtual Reality (VR) technologies can be leveraged to study the incorporation of automation systems involving Human-Robot Coexistence (HRC) in assembly processes before the physical system is available, which is beneficial for increasing productivity and identifying issues beforehand, thus preventing unexpected costs. In this context, a VR simulation environment was developed with two innovative factors: (1) the possibility to evaluate multiple new automated and semi-automated cabin and cargo processes and select the best one in terms of specific Key Performance Indicators (KPIs) for a future implementation in the physical system, and (2) the capability to study the ergonomics of the human worker inside the narrow space of the fuselage while assembling the parts and coexisting with robots, without compromising the worker's safety. The results show that most of the new proposed strategies improve the assembly time, worker cost, or ergonomics of the process, with an investment varying between 100 K and 200 K euros and an ROI of 1-2 years.
  • Authors: Iparraguirre Gil, Olatz (Corresponding author); Amundarain Irizar, Aiert; Brazalez Guerra, Alfonso; et al.
    Journal: SENSORS
    ISSN: 1424-8220 Vol. 21 No. 4 2021 pp. 1254
    European road safety has improved greatly in recent decades. However, the current figures still fall far short of the European Commission's road safety targets. In this context, Cooperative Intelligent Transport Systems (C-ITS) are expected to significantly improve road safety, traffic efficiency and driving comfort by helping the driver to make better decisions and adapt to the traffic situation. This paper puts forward two vision-based applications for traffic sign recognition (TSR) and real-time weather alerts, such as fog banks. These modules will support operators in road infrastructure maintenance tasks as well as drivers, giving them valuable information via C-ITS messages. Different state-of-the-art methods are analysed using both publicly available datasets (GTSB) as well as our own image databases (Ceit-TSR and Ceit-Foggy). The selected models for TSR implementation are based on Aggregate Channel Features (ACF) and Convolutional Neural Networks (CNN) that reach more than 90% accuracy in real time. Regarding fog detection, an image feature extraction method on different colour spaces is proposed to differentiate sunny, cloudy and foggy scenes, as well as the visibility level. Both applications are already running in an onboard probe vehicle system.
  • Authors: Catalan, J.; Garcia, J.; Blanco, A.; et al.
    ISSN: 2076-3417 Vol. 11 No. 14 2021 pp. 6259
    The present study aims to evaluate the advantages of a master-slave robotic rehabilitation therapy in which the patient is assisted in real time by a therapist. We have also explored whether this type of strategy is applicable in a tele-rehabilitation environment. A pilot study has been carried out involving 10 patients who performed a point-to-point rehabilitation exercise supported by three assistance modalities: fixed assistance (without therapist interaction), local therapist assistance, and remote therapist assistance in a simulated tele-rehabilitation scenario. The rehabilitation exercise was performed using an upper-limb rehabilitation robotic device that assists the patients through force fields. The results suggest that the assistance provided by the therapist is better adapted to patient needs than the fixed assistance mode. Therefore, it maximizes the patient's level of effort, which is an important aspect to improve the rehabilitation outcomes. We have also seen that in a tele-rehabilitation environment it is more difficult to assess when to assist the patient than locally. However, the assistance suits patients better than the fixed assistance mode.
  • Authors: Moru, D. K. (Corresponding author); Borro Yagüez, Diego
    Journal: MEASUREMENT
    ISSN: 0263-2241 Vol. 171 2021 pp. 108750
    Industrial vision highlights a growing trend in industrial systems. As camera sensors become smarter, the quality of the data produced increases, improving the accuracy of the results. One of the most decisive steps for getting accurate measurements is the calibration process. This paper aims to analyze the effect of four calibration parameters: camera focus, exposure time, calibration plate tilt and number of images, on the calibration accuracy. Endocentric and telecentric lenses are used in the image acquisition, and a comparative quality analysis of the calibration result is obtained using statistical methods. A sample of 2176 images is used to generate the population, and the calibration error is obtained for the different values of the parameters of interest. To study the influence of each parameter on the calibration error, a multivariable statistical analysis is performed. Statistically significant results were obtained for all parameters except the exposure time parameter, leading to the conclusion that the calibration results (and hence the measurement accuracy) can be improved by choosing the appropriate calibration parameters.
  • Authors: Borro Yagüez, Diego (Corresponding author); Suescun Cruces, Ángel; Brazalez Guerra, Alfonso; et al.
    ISSN: 2076-3417 Vol. 11 No. 4 2021 pp. 1443
    Featured Application: Comparison of two digital solutions (tablet-based and Augmented-Reality-based) for bus maintenance against the traditional paper-based solution. This paper presents two digital systems developed as an example of intelligent garage and maintenance, targeting the applicability of augmented reality and wearable device technologies to the maintenance of bus fleets. Both solutions are designed to improve the maintenance process based on the verification of task checklists. The main contribution of the paper focuses on the implementation of the prototypes in the company's facilities in an operational environment with real users, and on addressing the difficulties inherent in transferring a technology to a real work environment, such as a mechanical workshop. The experiments have been conducted in real operation thanks to the involvement of the public transport operator DBUS, which operates public transport buses in the city of Donostia-San Sebastian (Spain). Two solutions have been developed and compared against the traditional process: one based on a tablet and another based on Microsoft HoloLens. The results show objective metrics (Key Performance Indicators, KPIs) as well as subjective metrics based on questionnaires comparing the two technological approaches against the traditional one based on manual work and paper.
  • Authors: Amarillo Espitia, Andres (Corresponding author); Sánchez Tapia, Emilio; Caceres, J.; et al.
    ISSN: 1875-4791 Vol. 13 No. 6 2021 pp. 1473 - 1484
    The growing introduction of robotics in non-industrial applications, where the environment is unstructured and changing, has led to the need to develop safer and more intuitive human-robot interfaces. In such environments, the use of collaborative robots has potential benefits, due to the combination of user experience, knowledge and flexibility with the robot's accuracy, stiffness and repeatability. Nevertheless, in order to guarantee a functional collaboration in these environments, the interaction between user and robot must be intuitive, natural, fast and easy to use. On the one hand, commercial collaborative robots are less accurate and less stiff than traditional industrial ones; on the other hand, the latter lack intuitive interaction interfaces. There are tasks in which the stiffness of industrial robots and the intuitive interaction interfaces of commercial collaborative robots are both desirable. This is the case in some robot-assisted surgical procedures, such as robot-assisted spine surgery, with high accuracy demands and the need for intuitive surgeon-robot interaction. This paper presents a hand-guiding methodology for functional human-robot collaboration and introduces novel algorithms to enhance its behavior. Its implementation in a robotic surgical assistant for spine procedures is also presented. It is emphasized how a traditional industrial robot can be used as a collaborative one when the available commercial collaborative robots do not have the required accuracy and stiffness for the task.
  • Authors: Gil Nobajas, Jorge Juan; Díaz Garmendia, Iñaki
    ISSN: 2300-2611 Vol. 30 No. 1 2020 pp. 123 - 138
  • Authors: Gil Nobajas, Jorge Juan; Ugartemendia, Axier; Díaz Garmendia, Iñaki
    ISSN: 2076-3417 Vol. 10 No. 24 2020 pp. 8807
    Virtual Reality environments are being used on a mass scale in fields such as industry and medicine. These virtual scenarios serve very different purposes, such as prototyping, gaming and exercising. Interaction with the virtual environment is mainly achieved through the senses of sight and hearing via devices such as a mouse or VR glasses. To this end, haptic research started a few decades ago with the aim of improving this interaction through the sense of touch. A key element, hitherto not researched, is the effective combination of virtual elastic, viscous, and inertial effects in the haptic feedback rendered to the user, and the safety implications of these feedback effects. This is of particular importance in neurological rehabilitation exercising, as interaction realism and safety are of great importance in therapy and for the patient. Therefore, this work addresses the stability analysis of the combination of three haptic effects (elastic, viscous, and inertial) and the subjective feeling on the part of users regarding different combinations of these effects. A theoretical analysis is presented with a view to establishing stable control principles, and a user study was carried out in order to help understand users' perception of different combinations of haptic effects.
  • Authors: Ugartemendia, A. (Corresponding author); Rosquete De Mora, Daniel Humberto; Gil Nobajas, Jorge Juan; et al.
    ISSN: 1070-9932 Vol. 27 No. 2 2020 pp. 78 - 86
    Robotic rehabilitation for post-stroke therapies is an emerging new domain of application for robotics, with proven success stories and clinical studies. New robotic devices and software applications are hitting the market, with the aim of assisting specialists carrying out physical therapies and even patients exercising at home. Rehabilitation robots are designed to assist patients performing repetitive movements with their hemiparetic limbs to regain motion. A successful robotic device for rehabilitation demands a large workspace and force-feedback capabilities similar to those of a human physiotherapist. These desired features are usually achieved at the expense of other important requirements, such as transparency and backdrivability, degrading the overall human-machine interaction experience.
  • Authors: Moru, Desmond Kehinde (Corresponding author); Borro Yagüez, Diego
    ISSN: 0268-3768 Vol. 106 2020 pp. 105 - 123
    Quality control has become a priority in the inspection processes of industrial gear manufacturing. Due to the advancement of technology and the realization of Industry 4.0, smart factories demand high precision and accuracy in the measurement and inspection of industrial gears. Machine vision technology provides image-based inspection and analysis for such demanding applications. With the use of software, sensors, cameras, and robot guidance, such integrated systems can be realized. The aim of this paper is to deploy an improved machine vision application to determine the precise measurement of industrial gears, at subpixel level, with the potential to improve quality control, reduce downtime, and optimize the inspection process. A machine vision application (Vision2D) has been developed to acquire and analyze captured images to implement the measurement and inspection process. First, a very low calibration error of 0.06 pixels was obtained after calibration. The calibrated vision system was verified by measuring a ground-truth sample gear in a Coordinate Measuring Machine (CMM), using the generated parameter as the nominal value of the outer diameter. A methodical study of the global uncertainty associated with the process is carried out in order to better characterize the admissible zone for accepting gears. After that, the proposed system analyzed twelve other samples with a nominal tolerance threshold of +/- 0.020 mm. Among the gears inspected, the Vision2D application identified eight gears that were accepted and four bad gears that were rejected. The inspection result demonstrates an improvement in the algorithm of the Vision2D system application compared with previously existing algorithms.
  • Authors: Gil Nobajas, Jorge Juan (Corresponding author); Díaz Garmendia, Iñaki
    ISSN: 0278-0046 Vol. 67 No. 1 2020 pp. 698 - 705
    Haptic devices driven by DC motors are usually controlled in current mode due to the direct relationship between current and torque. This work analyzes the performance of voltage-mode controllers, whose main drawback is that the torque of the actuator depends on its electrical dynamics. However, the electrical dynamics of the motor add the viscosity generated by the back-electromotive force. Since the motor damping seen from the handle of the interface is increased by the square of the transmission ratio, the physical damping of the mechanism can be very high while maintaining low inertia. As a result, very high performance in terms of critical stiffness can be achieved, even using cost-effective electronics. If the damping is set too high, there is a trade-off between the achievable virtual stiffness during haptic interaction and the backdrivability in free movement. To investigate the benefits of this motor control strategy, CEIT's haptic gearshift is used as a testbed. The experimental results confirm that a very high critical stiffness can be achieved using this strategy.
  • Authors: Gil Nobajas, Jorge Juan
    ISBN: 978-84-313-3650-9 2021
    Nowadays, many companies are considering automating systems or processes. Automating means replacing the manual control of a physical system with automatic control, that is, operation without human intervention. Within engineering, automatic control is the discipline that designs the algorithms enabling this change of control paradigm. The most common example of an automatic system is the industrial robot, which operates on its own according to programmed rules. This manual presents the basic concepts of automatic control. The first chapters are devoted to the mathematical modelling of signals and systems. Next, several aspects of the behaviour of physical systems are studied: the transient regime, stability, and steady-state error. Finally, some design tools are explained for proportional controllers (the root-locus method and tuning using Bode diagrams) and PID controllers (mainly the Ziegler-Nichols method).
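The C-ITS paper above proposes extracting image features in different colour spaces to separate sunny, cloudy and foggy scenes. As a minimal illustration of two such cues (fog tends to lower both colour saturation and contrast), the sketch below is an assumption-laden toy, not the paper's actual method; the function names and thresholds are invented for the example.

```python
import numpy as np

def scene_features(rgb: np.ndarray) -> tuple[float, float]:
    """Return (mean saturation, grey-level contrast) for an HxWx3 uint8 image.
    Saturation is computed HSV-style as (max-min)/max per pixel; contrast is
    the standard deviation of the grey level."""
    x = rgb.astype(float) / 255.0
    mx = x.max(axis=2)
    mn = x.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / (mx + 1e-9), 0.0)
    grey = x.mean(axis=2)
    return float(saturation.mean()), float(grey.std())

def looks_foggy(rgb: np.ndarray, sat_thr: float = 0.10,
                contrast_thr: float = 0.05) -> bool:
    """Naive rule: a scene with very little colour and very little contrast is
    flagged as foggy. Thresholds are illustrative assumptions."""
    sat, contrast = scene_features(rgb)
    return bool(sat < sat_thr and contrast < contrast_thr)
```

A real system would of course learn these thresholds from labelled data such as the Ceit-Foggy set mentioned in the abstract.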
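The voltage-mode haptic-control paper notes that the motor damping seen from the handle grows with the square of the transmission ratio, and that higher physical damping raises the achievable critical stiffness. A minimal sketch of that reasoning, using the well-known Colgate-Schenkel passivity bound (the numeric values are illustrative assumptions, not taken from the paper):

```python
def reflected_damping(b_motor: float, n: float) -> float:
    """Viscous damping of the motor as seen from the device handle; it scales
    with the square of the transmission ratio n."""
    return n ** 2 * b_motor

def max_stable_stiffness(b_physical: float, b_virtual: float, T: float) -> float:
    """Colgate-Schenkel passivity condition b > K*T/2 + |B|, solved for the
    largest passively renderable virtual stiffness K at sampling period T."""
    return 2.0 * (b_physical - abs(b_virtual)) / T

# Illustrative numbers (assumptions): a small motor damping of 0.002 N*s/m
# reflected through a 20:1 transmission gives 0.8 N*s/m at the handle,
# which at a 1 kHz servo rate bounds the virtual stiffness at 1600 N/m.
b_handle = reflected_damping(b_motor=0.002, n=20.0)
k_max = max_stable_stiffness(b_handle, b_virtual=0.0, T=0.001)
```

This also makes the trade-off mentioned in the abstract visible: raising n improves k_max quadratically, but the same reflected damping is felt by the user in free movement.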
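The gear-inspection paper accepts or rejects gears against a nominal tolerance of +/- 0.020 mm after quantifying the global measurement uncertainty. A hedged sketch of the underlying guard-banding idea follows; the function name and the uncertainty value are illustrative assumptions, not details from the paper.

```python
def accept_gear(measured: float, nominal: float,
                tol: float = 0.020, u: float = 0.0) -> bool:
    """Guard-banded acceptance: the tolerance zone is shrunk by the measurement
    uncertainty u, so a part is accepted only if it is safely inside the limits
    even in the worst case of measurement error."""
    return abs(measured - nominal) <= (tol - u)
```

With an assumed uncertainty of 0.006 mm, a diameter measured 0.010 mm from nominal is accepted, while one measured 0.015 mm away is rejected despite being within the nominal +/- 0.020 mm band, because it falls inside the guard band.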
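The control manual listed above closes with Ziegler-Nichols PID tuning. As a quick illustration of the classic closed-loop rules it refers to, the sketch below maps the ultimate gain K_u and ultimate period T_u to PID parameters using the standard textbook table (this is the generic method, not code from the manual):

```python
def ziegler_nichols_pid(K_u: float, T_u: float) -> tuple[float, float, float]:
    """Classic Ziegler-Nichols closed-loop rules for a PID controller:
    K_p = 0.6*K_u, T_i = T_u/2, T_d = T_u/8."""
    return 0.6 * K_u, T_u / 2.0, T_u / 8.0

# Example: an ultimate gain of 10 and ultimate period of 2 s yield
# K_p = 6.0, T_i = 1.0 s, T_d = 0.25 s.
K_p, T_i, T_d = ziegler_nichols_pid(10.0, 2.0)
```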

Projects since 2018

  • Title: Methods and algorithms for automation in the immersive holistic digitalisation of the factory. Elkartek 21-22, type 1.
    File code: KK-2022_00065
    Principal investigator: JORGE JUAN GIL NOBAJAS.
    Funder: GOBIERNO VASCO
    Call: ELKARTEK 2022 Programme K1: Collaborative Fundamental Research Project - Fundamental Research
    Start date: 01-03-2022
    End date: 31-12-2023
    Amount awarded: €83,478.45
    Other funds: -