My project has a 2-degrees-of-freedom arm with a projector-camera unit mounted on top.
The projector is a 1080p device with manual focus, and the camera is 1080p as well. The device also mounts a depth sensor that can provide a distance reading for a specific point at any time, and a microphone for voice recognition.
They are all connected to a Jetson Nano. The focus motor, horizontal motor and vertical motor are driven through an Arduino.
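The Jetson-to-Arduino link could be sketched as follows. This is a minimal sketch assuming a simple ASCII command protocol; the command format, port name and baud rate are assumptions, since the real Arduino firmware protocol is not specified here.

```python
# Sketch of the Jetson Nano -> Arduino motor link.
# The "M1:.. M2:.. M3:.." line format is hypothetical, chosen to match
# the printout format used elsewhere in this spec.

def format_motor_command(m1_deg, m2_deg, m3_steps):
    """Build one ASCII command line for the Arduino motor driver.

    M1: horizontal motor (0-360 degrees).
    M2: vertical motor (60-180 degrees, per the 120-degree range with a
        60-degree offset from the bottom).
    M3: focus motor position in steps.
    """
    if not 0 <= m1_deg <= 360:
        raise ValueError("M1 out of range (0-360)")
    if not 60 <= m2_deg <= 180:
        raise ValueError("M2 out of range (60-180)")
    return f"M1:{m1_deg} M2:{m2_deg} M3:{m3_steps}\n"

# Sending it would look like this (requires pyserial and the real port):
# import serial
# with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#     port.write(format_motor_command(245, 98, 43).encode("ascii"))
```

Keeping the range checks on the Jetson side means the Arduino firmware can stay a thin step-driver.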
To keep the projected image in focus at all times, the projector focus must be calibrated, so a motor is attached to the wheel that adjusts it.
The horizontal DoF covers 360 degrees; the vertical DoF covers 120 degrees, offset 60 degrees from the bottom.
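The focus calibration described above could reduce to a lookup from depth-sensor distance to focus-motor position. A minimal sketch, assuming a calibration table of (distance, sharp-focus step) pairs; the table values below are placeholders, to be replaced by the real calibration procedure:

```python
import numpy as np

# Hypothetical calibration table: the focus-motor step that produced a
# sharp image at each measured projection distance. The numbers are
# placeholders, not real calibration data.
CAL_DISTANCE_M = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
CAL_FOCUS_STEP = np.array([820, 540, 310, 210, 120])

def focus_step_for_distance(distance_m):
    """Interpolate the focus-motor position for a depth-sensor reading."""
    step = np.interp(distance_m, CAL_DISTANCE_M, CAL_FOCUS_STEP)
    return int(round(float(step)))
```

Linear interpolation between calibration points is usually enough if the table is dense; the camera-sharpness metric from deliverable a) can refine the result around this initial guess.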
The deliverable products are:
a) A software tool that, using the camera and/or the depth sensor, can focus the projected image at any time.
b) A tool that, at any given arm position, adapts the projected image to the surface so that it always appears upright. This must apply to the whole projection, not just playback content or still images.
c) A tool that detects whether the surface is projectable. If the surface has any disruption or is not flat, a clean image is impossible. The position is saved.
d) A tool that can recognize library objects. The position is saved.
e) A tool that can recognize library objects and project a tag over them. The position is saved.
f) A tool that, for every position of the arm, projects an image together with the current arm angles.
g) A tool that continuously prints the motor positions. Example: “M1: 245, M2: 98, M3: 43”, where M1 is the horizontal motor, M2 the vertical motor and M3 the focus motor.
h) A tool that prints all the possible positions for M1 and M2
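For deliverable c), one common approach is to fit a plane to a grid of depth-sensor samples and reject the surface when any sample deviates too far from that plane. A minimal sketch, assuming the depth sensor yields (x, y, z) points in metres; the 1 cm tolerance is an assumed threshold, not a specified requirement:

```python
import numpy as np

def surface_is_projectable(points_xyz, max_residual_m=0.01):
    """Fit a plane z = a*x + b*y + c to depth samples by least squares
    and accept the surface only if every sample lies within
    max_residual_m of the fitted plane (assumed 1 cm tolerance).
    """
    pts = np.asarray(points_xyz, dtype=float)
    # Design matrix for the plane model z = a*x + b*y + c.
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = np.abs(A @ coeffs - pts[:, 2])
    return bool(residuals.max() <= max_residual_m)
```

If this check fails, the red/green feedback image described in the calibration script below can be driven directly from the returned boolean.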
All these products can be tested individually, but scripts must group them as follows:
A script that runs a), b), c), g), d), f) and h) in order to calibrate the whole scenario. The user can use specific voice commands to tell the device to run this script. Example: “Calibrate the space”. If the position is not projectable a red image is shown; otherwise a green image.
While a), b), c), g) and d) are running, the user can use specific voice commands to tell the device to project over, below, to the right or to the left of a recognized object. Example: “Project below the painting”.
While a), b), c), g) and d) are running, the user can use specific voice commands to tell the device to save a position under a name. Example: “Save Position 428 as Cabinet”.
While a), b), c), g) and d) are running, the user can use specific voice commands to tell the device to go to a saved position. Example: “Project on Cabinet”.
The user can use specific voice commands to tell the device to turn off, driving the motors to M1: 0 and M2: 0 as reported by h).
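The voice interactions above could be handled by a small dispatcher on top of the speech recognizer. A minimal sketch, assuming the recognizer already yields plain English strings; the action names and slot layout are assumptions derived from the examples in this spec:

```python
import re

# Map recognized utterances to (action, arguments). The patterns mirror
# the example commands given in the spec; action names are hypothetical.
PATTERNS = [
    ("calibrate", re.compile(r"^calibrate the space$", re.I)),
    ("save",      re.compile(r"^save position (\d+) as (\w+)$", re.I)),
    ("goto",      re.compile(r"^project on (\w+)$", re.I)),
    ("relative",  re.compile(
        r"^project (over|below|to the left|to the right) (?:of )?the (\w+)$",
        re.I)),
    ("shutdown",  re.compile(r"^turn off$", re.I)),
]

def parse_command(text):
    """Return (action, args) for a recognized utterance, or None."""
    for action, pattern in PATTERNS:
        match = pattern.match(text.strip())
        if match:
            return action, match.groups()
    return None
```

Each returned action would then trigger the corresponding tool: "save" stores the current M1/M2 pair under the given name, "goto" replays it, and "shutdown" sends M1: 0, M2: 0 before powering down.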
A script that recognizes voice commands and prints them on screen. Example: “Turn the light red”. This is for another development.
The hardware prototype is already built, so everything can be tested as soon as it is finished.