Torque Robot Unsupervised System

University of Arizona Capstone Project

Team Lead

Tucson, AZ

Sept. 2020 to May 2021

Overview

Project Sponsor

Raytheon Technologies, Missiles and Defense

Background

At Raytheon, installing fasteners is currently a manual process. When parts are assembled, each fastener must be installed in a specific order and torqued to a specific value. Assemblies can consist of hundreds of fasteners, requiring several hours of work from multiple personnel to install.

For example, a single joint consisting of 90 fasteners typically takes a team of 4 operators 2 hours to complete.

Significance of Project

Produce a system that torques fasteners in less time and with a fraction of the labor, while improving overall quality control.

Implementation of our design solutions is estimated to reduce production costs by $450K annually.

Key System Requirements

  • The system shall accommodate typical fastener positional tolerance (true position) up to Ø0.050″ (Ø1.27 mm)
  • The system shall be capable of applying between 50 IN-LBS and 360 IN-LBS
  • The applied torque shall be accurate to ±10% of the user-specified value

Key System Components

  • 3-axis gantry rail system (similar to a CNC router) actuated using 4 stepper motors to produce motion in three degrees of freedom
  • Robust Graphical User Interface (GUI)—providing assembly parameter file upload, plate calibration module, operations version control, dry run capabilities, simulation graphics, and more
  • Proportional Integral (PI) closed-loop control system for high torque application accuracy (see the sketch after this list)
  • Positional accuracy achieved using machine vision to close the feedback control loop—allowing unsupervised motion adjustments in real time
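
Below is a minimal sketch of what a PI torque loop like the one described above could look like. The `read_torque` and `set_motor_speed` callables, the gain values, and the loop period are placeholders for illustration, not the actual system interfaces or tuned gains.

```python
# Minimal PI torque-control loop sketch (illustrative only).
# read_torque() and set_motor_speed() are hypothetical hardware interfaces;
# KP, KI, and DT are placeholder values, not the gains used in the real system.
import time

KP = 0.8          # proportional gain (placeholder)
KI = 0.2          # integral gain (placeholder)
DT = 0.01         # control loop period, seconds (placeholder)
TOLERANCE = 0.10  # +/-10% accuracy requirement

def apply_torque(target_in_lbs, read_torque, set_motor_speed):
    """Drive the fastener until the measured torque is within tolerance of the target."""
    integral = 0.0
    while True:
        measured = read_torque()                  # reaction-style torque sensor reading
        error = target_in_lbs - measured
        if abs(error) <= TOLERANCE * target_in_lbs:
            set_motor_speed(0.0)                  # within spec: stop driving
            return measured
        integral += error * DT
        command = KP * error + KI * integral      # PI control law
        set_motor_speed(command)
        time.sleep(DT)
```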

Block Diagrams

Software

Video Overview

This video was created for our Final Acceptance Review (FAR) presentation. The video is intended to show verification for the majority of our demonstration requirements associated with our system software.

The video provides a deeper dive into the Graphical User Interface operations. It also covers some bonus features incorporated into the design. However, it may appear somewhat out of context, since its primary intent is to complement our FAR presentation.

Machine Vision

Overview

  • Machine vision is implemented using two Allied Vision Alvium cameras, each with its own illumination ring
  • Each camera is responsible for alignment in its respective lateral direction (i.e., one camera aligns in the X-direction and the other aligns in the Y-direction)
  • A Haar Cascade Classifier was trained to detect and localize a fastener in the field of view, providing a computationally efficient way to align with a fastener to within ~1.0 mm
  • High-accuracy alignment is achieved by running the images through several convolutional operations to extract key geometric data used to obtain three centerline approximations
  • The offsets in the X and Y directions are determined, and the gantry moves toward alignment until the offset falls below the desired threshold (see the sketch after this list)
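
Below is a simplified sketch of the coarse, Haar-cascade alignment step using OpenCV. The cascade file name, the pixel-to-millimeter scale factor, and the threshold value are illustrative assumptions, not the trained model or calibration used in the delivered system.

```python
# Coarse Haar-cascade alignment sketch (illustrative only).
# 'fastener_cascade.xml' and MM_PER_PIXEL are hypothetical placeholders.
import cv2

MM_PER_PIXEL = 0.05    # placeholder camera scale factor (mm per pixel)
THRESHOLD_MM = 1.0     # coarse-alignment target accuracy (~1.0 mm)

# Hypothetical trained classifier file; the real system loads its own model
cascade = cv2.CascadeClassifier("fastener_cascade.xml")

def fastener_offset_mm(gray_frame):
    """Return the lateral offset (mm) of the detected fastener from the image
    center, or None if no fastener is found."""
    detections = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(detections) == 0:
        return None
    x, y, w, h = detections[0]                     # take the first detection
    fastener_center_px = x + w / 2.0
    image_center_px = gray_frame.shape[1] / 2.0    # image width / 2
    return (fastener_center_px - image_center_px) * MM_PER_PIXEL

# The gantry jogs along this camera's axis until |offset| < THRESHOLD_MM, then
# hands off to the finer, convolution-based centerline alignment stage.
```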

Torque Control

Main Torque Requirements

  • The torque mechanism shall be capable of applying a torque between 50 IN-LBS and 360 IN-LBS
  • The applied torque shall be accurate to ±10% of the user-specified value

Verification Method

  • Use Raytheon’s Metrology Department to calibrate our reaction-style torque sensor
  • Verify the accuracy of the torque sensor and calibration fit by comparing the sensor-determined torque to the actual metrology-determined value
  • Test the accuracy of the fully autonomous torque process across the entire torque range (a sketch of this accuracy calculation follows this list)
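
Below is a small sketch of the percent-error calculation used for this kind of accuracy check. The paired torque readings are placeholder numbers, not actual test data.

```python
# Percent error of the system's applied torque vs. the metrology reference.
# The sample data below are placeholders, not real test results.
import statistics

applied   = [50.4, 121.8, 200.3, 282.7, 358.1]   # system-applied torque (IN-LBS), placeholder
reference = [50.0, 118.0, 205.0, 290.0, 352.0]   # metrology-determined torque (IN-LBS), placeholder

errors_pct = [abs(a - r) / r * 100.0 for a, r in zip(applied, reference)]

print(f"Max error:    {max(errors_pct):.3f}%")
print(f"Mean error:   {statistics.mean(errors_pct):.3f}%")
print(f"Median error: {statistics.median(errors_pct):.3f}%")
```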

Calibration Curve

Achieved a strong linear fit with an R-squared value of 0.999
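
Below is a minimal sketch of how such a linear calibration fit and its R-squared value can be computed. The raw sensor counts and reference torques are placeholder values, not the actual calibration data.

```python
# Linear calibration fit sketch for the reaction torque sensor (illustrative only).
# The data points are placeholders, not the actual calibration readings.
import numpy as np

raw_counts  = np.array([105.0, 260.0, 420.0, 590.0, 755.0])   # sensor output (placeholder)
true_torque = np.array([50.0, 125.0, 200.0, 280.0, 360.0])    # metrology torque, IN-LBS (placeholder)

slope, intercept = np.polyfit(raw_counts, true_torque, 1)     # least-squares linear fit

predicted = slope * raw_counts + intercept
ss_res = np.sum((true_torque - predicted) ** 2)
ss_tot = np.sum((true_torque - np.mean(true_torque)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"torque = {slope:.4f} * counts + {intercept:.4f}  (R^2 = {r_squared:.4f})")
```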

Final Results

Max. Error: 6.780%

Mean Error: 2.220%

Median Error: 2.048%