HiTrack - a novel eye/head tracking SDK for Android phones

HiTrack is a novel eye/head tracking SDK.
HiTrack uses only the front camera to detect the center of the pupil and track the movement of the eyes and head, and from that it extracts eye-movement, look-away, sight-focus and other events. Based on these events, users can play games, move a cursor, pause a video and more.
With just a few lines of code, developers can build eye-tracking apps. No infrared camera or any other extra hardware is needed.
The latest version is available at sightsoft.cn/hitrack/. It includes detailed documentation and several demo projects.
One demo is SightPlane, in which you shoot down planes with eye movements.
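As a rough illustration of what "a few lines of code" might look like: the class and callback names below are invented for this sketch, not the actual HiTrack API (see the SDK documentation for the real one); the overall shape, though, is what a callback-based eye-tracking SDK typically exposes.

```java
import android.app.Activity;
import android.os.Bundle;

// Hypothetical sketch only: EyeTracker and its listener are invented names,
// not the real HiTrack API. The shape -- bind the front camera, register a
// callback, react to gaze and look-away events -- is the typical pattern.
public class GazeDemoActivity extends Activity {

    private EyeTracker tracker; // hypothetical SDK entry point

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tracker = new EyeTracker(this); // uses only the front camera
        tracker.setListener(new EyeTracker.Listener() {
            @Override public void onGazeMoved(float x, float y) {
                // move a cursor, aim in a game, scroll a page ...
            }
            @Override public void onLookAway() {
                // e.g. pause video playback
            }
        });
    }

    @Override protected void onResume() { super.onResume(); tracker.start(); }
    @Override protected void onPause()  { tracker.stop();   super.onPause(); }
}
```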
Any suggestions are welcome.

Really cool... kinda like the built-in features in TouchWiz... I might be able to find a use for this in the future with some of my apps.

Related

[Q] [APP IDEA] Converting a Monitor into a Touch Screen using the Android camera

Hey, I'm a graphic designer, and I always have weird and unique ideas. So far I haven't been able to get a single developer interested in them.
But this time it could be different. Or not.
This is probably just a novelty thing... I don't see it having much practical purpose, but maybe someone can expand on the idea... anyways.
Here is my Concept!!
First, we stick a colourful (or QR-code-like) sticker on each corner of our PC/Mac monitor, with a server app of some sort running on the computer.
Second, we point our Android camera at the monitor... the viewfinder should use the stickers to identify the touch-monitor area.
Finally, touching the viewfinder should translate into a "mouse click" at that location on the monitor...
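A minimal sketch of the coordinate-mapping step, assuming the four corner stickers have already been located in the camera preview (for example by colour detection) and the PC side can receive click coordinates; Android's Matrix.setPolyToPoly gives the perspective transform from preview pixels to monitor pixels.

```java
import android.graphics.Matrix;

// Minimal sketch. Assumptions: the four corners are already detected in the
// preview frame, and the server app on the PC accepts click coordinates.
public class TouchMapper {

    private final Matrix previewToMonitor = new Matrix();

    // Corner order for both arrays: {TLx,TLy, TRx,TRy, BRx,BRy, BLx,BLy}
    public TouchMapper(float[] cornersInPreview, float[] cornersOnMonitor) {
        // Perspective transform from preview pixels to monitor pixels.
        previewToMonitor.setPolyToPoly(cornersInPreview, 0, cornersOnMonitor, 0, 4);
    }

    // Returns {x, y} in monitor pixels for a touch at (tx, ty) on the viewfinder.
    public float[] mapTouch(float tx, float ty) {
        float[] point = {tx, ty};
        previewToMonitor.mapPoints(point);
        return point; // send this to the server app on the PC as a mouse click
    }
}
```

The harder part is reliably finding the four stickers in every preview frame; once you have them, the mapping itself is the easy bit.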
Why??? you may ask...
1) It's augmented reality... it comes with a lot of bragging rights...
2) If you're not within arm's length for some reason (like me when I'm lying down and watching TV shows on my monitor), you can simply download your next TV show without having to get up. I'm trying to think of more reasons, but I hope you guys can come up with some.
In any case, if you are interested in developing this app, please let me know. I would love to help out with the GUI if you need me to.

[Q] Augmented Reality App Development

I would like to know how AR apps like Layar/Junaio work. I want to build a similar app, but for indoor guidance in a particular building. The building is fixed and the plan inside the building doesn't change.
The crux of the app would be to guide visitors of the building to their destinations.
End result: possibly output a map, or provide Wikitude Drive-style navigation.
Typical Usage:
A visitor arrives at the building, scans the QR code posted at the entrance and downloads the app from the app store/market.
In the lobby, the user opens the downloaded app and scans another large picture (it can be a barcode or any unique image).
The app shows the different areas of the building, just like Layar or Junaio; the user then selects a particular area and gets a navigation map from that point.
I will be using different unique pictures/barcodes at different points to locate the user's position instead of relying on GPS, which would not help with indoor navigation.
Any input on how to start would be of great help. I'm an experienced mobile app developer and have built regular utility apps, API mashups, etc., but nothing in AR.
AR apps usually use a combination of GPS, compass, altitude, accelerometer and gyroscope data to figure out the exact location and orientation of the device. They then access the camera view and place graphics on top of it.
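For the sensor side, a minimal sketch of how such apps typically read device orientation on Android (accelerometer plus magnetometer fused via SensorManager); the AR layer then uses the azimuth and pitch to decide where to draw graphics over the camera preview. The camera preview and overlay view are assumed to exist elsewhere in the app.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Orientation part only; the camera preview and the overlay that draws the
// graphics are assumed to live elsewhere in the app.
public class OrientationTracker implements SensorEventListener {

    private final SensorManager sensorManager;
    private final float[] gravity = new float[3];
    private final float[] magnetic = new float[3];
    private final float[] rotation = new float[9];
    private final float[] orientation = new float[3]; // azimuth, pitch, roll (radians)

    public OrientationTracker(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_UI);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_UI);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, magnetic, 0, 3);
        }
        if (SensorManager.getRotationMatrix(rotation, null, gravity, magnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            // orientation[0] is the azimuth (compass heading); combined with the
            // device position it tells you which points of interest are in front
            // of the camera and where to draw them on the overlay.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```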
The problem with your idea is that most AR apps come in one of two forms:
- Outdoors, without pinpoint positional accuracy (the device knows the heading exactly, so the direction is right, but the distance may be a little off)
- Based on a target symbol (think 3DS AR card, barcode, etc.) that keeps a point of reference in the scene
Unfortunately, your application would require you to be indoors, so it would be extremely hard to figure out the exact position because of the loss of GPS. At best you could have a series of scannable QR codes that then display a map and direct the user to the next point. You could determine the exact position of a user by calculating the change in gyroscope data from the beginning to the current time, but that would be hard, especially dealing with inaccuracies and interruptions.
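A minimal sketch of the scannable-codes approach, assuming the ZXing Barcode Scanner app is installed on the phone (its SCAN intent was the usual way to decode QR codes without bundling your own scanner); how the decoded waypoint is used afterwards is up to your app.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Assumes the ZXing Barcode Scanner app is installed; it handles the camera
// and decoding, and returns the decoded text via the SCAN intent result.
public class WaypointScanActivity extends Activity {

    private static final int SCAN_REQUEST = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = new Intent("com.google.zxing.client.android.SCAN");
        intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
        startActivityForResult(intent, SCAN_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == SCAN_REQUEST && resultCode == RESULT_OK) {
            String waypointId = data.getStringExtra("SCAN_RESULT");
            // Look this waypoint up in the building plan and draw the route
            // from here to the destination the user picked (app-specific).
        }
    }
}
```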
I think the safest bet for your app would be to have codes at certain points to help your users. If anyone else has a better answer, please correct me XD
Dsbtwins said:
You could determine the exact position of a user by calculating the change in gyroscope data from the beginning to the current time, but that would be hard, especially dealing with inaccuracies and interruptions.
That would be epic... but not all phones have gyros. Soo...
What about having devices set up to emit a distinct, directional, non-audible sound that the phone listens for, and using that to tell the app which room the user is in? All phones have a microphone.
Or key off music in a room if there is any
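A rough sketch of the listening side, assuming each room plays a distinct near-ultrasonic tone (the 18.5 kHz value and the threshold below are placeholders that would need calibration) and the app holds the RECORD_AUDIO permission; the Goertzel algorithm measures the energy at that one frequency in a short microphone capture.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Requires the RECORD_AUDIO permission. The beacon frequency and the
// detection threshold are assumptions, not measured values.
public class ToneDetector {

    private static final int SAMPLE_RATE = 44100;
    private static final double TARGET_HZ = 18500.0; // assumed room-beacon tone

    public static boolean toneHeard() {
        int bufSize = Math.max(4096, AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT));
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        short[] samples = new short[4096];
        rec.startRecording();
        int read = rec.read(samples, 0, samples.length);
        rec.stop();
        rec.release();

        // Goertzel algorithm: energy of the captured block at TARGET_HZ only.
        double coeff = 2.0 * Math.cos(2.0 * Math.PI * TARGET_HZ / SAMPLE_RATE);
        double s0, s1 = 0, s2 = 0;
        for (int i = 0; i < read; i++) {
            s0 = samples[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        double power = s1 * s1 + s2 * s2 - coeff * s1 * s2;
        return power > 1e12; // placeholder threshold: "the beacon is audible here"
    }
}
```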
Dsbtwins said:
At best you could have a series of scannable QR codes that then display a map and direct the user to the next point.
I think the safest bet for your app would be to have codes at certain points to help your users. If anyone else has a better answer, please correct me XD
Well, I have already discounted the use of GPS. A series of QR/scannable codes should help me solve the problem, since I would only be showing a map from the scanned code's location.
Let me give you an example of how I would like to implement this in real life.
The user enters Walmart and opens the app after scanning the first scannable code; he then sees the different areas/offers in the camera view (AR).
When the user selects, say, "groceries" in the camera view, the app should output a map showing how to reach the groceries area from the scanned location.
I believe a lot of people get lost trying to locate things in a large retail store. Walmart is just an example; there are other stores with this issue.
Guys, your input on the idea itself is welcome.
killersnowman said:
That would be epic... but not all phones have gyros. Soo...
What about having devices set up to emit a distinct, directional, non-audible sound that the phone listens for, and using that to tell the app which room the user is in? All phones have a microphone.
I don't think it will work in a noisy environment like malls or retail stores, but it would work well in a quieter place like a museum. Your idea is brilliant, though; I just don't know how it would work in reality.
Yes, that sounds like it should be possible. And honestly, what you could do is put a large "marker" (some sort of distinguishable shape, colour or pattern) on a cube in the middle of the store. That way, when the user holds up his phone to look at it, the phone knows what he is looking at depending on which face of the cube is visible. You can then use that as a reference in the view and map location icons over it.

Contest: Win an ARM Cortex-M0 Development Board

We're giving away an STM32 F0 DISCOVERY development board.
Contest
Enter your idea in this thread
Add pictures, links, video or whatever else you can.
Do anything you can (except create new accounts) to get people to click the thanks button
Contest ends Saturday, 23 June 2012
The winner gets an STM32F0 Discovery board
Rules
Anyone caught creating new accounts will be disqualified. There are automated systems in place for detecting this, which alert admins and senior moderators to review your new account.
One post per person. Multiple posts will be deleted. No exceptions
Use of social media (Google+, Twitter, Facebook, YouTube) is encouraged.
TLDR
You have one week. Put your idea in this thread, then get on your favorite social media service to get people to click that thanks button on your post.
I'm going to use it to educate myself.
I'm going to use it to write a high-performance microkernel (which I will later fix up for the Cortex-A8 / HD2) and for testing the ARMASM code I write, which I currently test on my only phone, the HD2, and that's painful thanks to HTC's SPL, the MPU, NAND fatigue and the fact that I need it working the next day.
It has Thumb-2 support, so I will try my hand at that too; Thumb-2 promises quite a lot of code density with roughly the same performance.
I'd also be porting the Little Kernel to it, which already has support for the Cortex-M3.
I'll use it to make the world a better place... just joking... mass destruction awaits if I get that... so don't give it to me.
Basically, because I have no idea how to use a development board, I'm gonna use it to learn how to use one and also learn to code, which is something I have been looking into. So yeah...
I will use it to develop a better wireless USB card. I already have one, an Atheros 2255. I would like to mod these two together for use with any OS. My idea is that you just plug and play, kinda like a GUI: you plug it in, a window comes up, and you can see the progress as it installs itself into your system OS and any hardware, without internet. I might have to put in a bigger storage device, but it can be done... AND I WILL DO IT!
Nerdie stuff
Well, I'm going to mod my phone and learn how to be an Android developer.
I want to add it to my LEGO collection >
I am joking around ... I love you developers
I'm going to find out what it is.
Unboxing video and then blend it
What I'll do with it...
I would try to create a WiFi cracker with it by connecting a WiFi module to it, and also try to run the Android OS on it, and finally I would do some home automation with it: DLNA and remote controlling various things.
Please hit thank you!
I really really want this...
I will build a full framework that connects your Discovery to the internet (home network => public IP, if you have one), runs a web server, gathers data from all over the house via an NRF24L01-based wireless network (small ATtiny-based modules with humidity, temperature and other sensors, controlling lights and power, etc.) and presents it all on a web page.
I would give it to one of the devs for the LG Optimus Thrill/3D, because I'm not a dev, but it would probably help development for my device greatly, and maybe we could get some good stuff going on this phone.
I would use it to play around with android.
*se-nsei. said:
I'm going to find out what it is.
Hahaha, I'm in the same boat as you.
I would use it to develop a custom AOSP-based image for use as a low-cost media tablet, with IR, DLNA, Remote Control and a TV tuner for the ultimate lounge room accessory.
A Real Car-puter
I would use it, in conjunction with an application board, to build a carputer... not one that lets you listen to or watch pirated media, but one which will automate things such as wipers and headlights. For the wipers, my car has only one intermittent setting, so I would like to add more settings; maybe I'll also look into rain sensors at some point, but not initially. The headlights will be controlled by time (automatically coming on at night) and by light sensors, to turn them on during daytime hours when lighting conditions are poor or I am driving through a tunnel. I'd also use speed-limit information hacked from GPS maps to light up my dash gauges in different colours depending on my current speed and the posted speed limit (red when more than 5% over the limit, green from 10% under to 5% over the limit, blue when more than 10% under the limit, and no colour when speed-limit information is missing).
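For what it's worth, the gauge-colour rule described above boils down to a few threshold checks, something like this sketch:

```java
// Thresholds copied from the post: red above 5% over the limit, green from
// 10% under to 5% over, blue below 10% under, no colour when no limit is known.
public final class DashColour {

    public static String colourFor(Double speedKmh, Double limitKmh) {
        if (speedKmh == null || limitKmh == null || limitKmh <= 0) return "none";
        double ratio = speedKmh / limitKmh;
        if (ratio > 1.05) return "red";
        if (ratio >= 0.90) return "green";
        return "blue";
    }
}
```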
I know the usage is light for such a board, but it leaves room for expansion, and the projects I have here seem a good starting point for learning.
Future projects could include controlling things such as the ignition, doors and windows, heating, etc. from my phone, and eventually building a customised alarm system. Also, some sort of laser mounted on a servo that projects a line/image onto the road, giving following drivers a guide to the distance they should keep from you depending on the speed you are travelling.
Good luck to everyone that enters.
First, I would learn how to use it, then use it to earn the Robotics merit badge (I'm in Scouts) and show other scouts how to do it. I would also integrate it somehow into my school science project that will help people (still have some planning to do :/).
Good Luck
I'm going to use it to build a giant robotic Obama
I'm going to use it for education: I will be using it in my two final years of high school, making automated systems in my engineering and IT classes!

[Project] Using the front camera for more than taking pictures and Skype

Hello friends,
I got curious when a well-known smartphone manufacturer started using the front camera for more than Skype and photos. My ICS tablet has a front camera and a feature called Face Unlock, which uses facial features to unlock the device. So what if this camera and some code could be used to add features and make the front camera more useful? I was also wondering whether this data could be made available to other applications, which would open up the environment. Take scrolling, for example, which is normally done through touch and drag: our application could send that data to the foreground app, i.e. a document reader or a browser. The application could register itself as an input device in Android's input settings and could then do various things by sending appropriate data to apps. I'm currently testing on Windows, using a webcam and some code to track the eyeball and trigger actions with it. If I'm successful enough, I'll start coding an Android app. A healthy discussion and your views would be appreciated, so I hope I'll see some replies. Take care.
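On stock Android (without OpenCV), one possible starting point is the built-in android.media.FaceDetector: it won't find the pupil, but it does return the mid-point between the eyes and the eye distance, which is enough for crude head/eye movement tracking. A minimal sketch, assuming the front-camera frame has already been converted to an RGB_565 bitmap:

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

// Assumption: the front-camera preview frame has already been converted to an
// RGB_565 Bitmap (FaceDetector only accepts that format). Tracking how the
// eye mid-point moves between frames gives a crude head/eye movement signal
// that could be turned into scroll or cursor events for the foreground app.
public class EyeMidpointTracker {

    public static PointF findEyeMidpoint(Bitmap rgb565Frame) {
        FaceDetector detector =
                new FaceDetector(rgb565Frame.getWidth(), rgb565Frame.getHeight(), 1);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        int found = detector.findFaces(rgb565Frame, faces);
        if (found == 0) return null; // no face in frame ("look away")

        PointF midpoint = new PointF();
        faces[0].getMidPoint(midpoint); // point midway between the two eyes
        return midpoint;
    }
}
```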
Many ICS devices can be made to support Face Unlock easily; I don't have any idea about eyeball tracking.

Existing tools for project?

I'm studying software engineering, and I wish to take on a small project involving making an Android app.
The app is expected to involve GPS/Google Maps and the camera, and requires object recognition.
Before actually starting to develop the app, I wish to make a proper sketch of how I'm going to do it. I was wondering whether the following already exist as freeware and are possible:
GPS - does it return X/Y coordinates only, or X/Y/Z (meaning, does it tell the phone how high it is above sea level)?
Google Maps/Google Earth - is it possible for the app to take information such as a city layout from them? Also, do they include height information for the ground?
Camera and object recognition - the app is expected to "look" using the camera and needs to recognize objects. The level of object detail is not very important, but it should be able to recognize certain things such as roads, buildings, people and so on.
Thanks for the assistance,
Raledon
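On the first question above: Android's GPS fixes do carry a Z component. Location.getAltitude() reports height above the WGS84 ellipsoid whenever the fix includes it. A minimal sketch, assuming the app holds the ACCESS_FINE_LOCATION permission:

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Requires the ACCESS_FINE_LOCATION permission and a GPS fix.
public class GpsAltitudeListener implements LocationListener {

    public static void start(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 1,
                new GpsAltitudeListener());
    }

    @Override
    public void onLocationChanged(Location loc) {
        double lat = loc.getLatitude();
        double lon = loc.getLongitude();
        // The "Z": metres above the WGS84 ellipsoid, present only when the fix has it.
        double alt = loc.hasAltitude() ? loc.getAltitude() : Double.NaN;
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```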
Raledon said:
The app is expected to involve GPS/Google Maps and the camera, and requires object recognition. [...]
I know of an open-source Android app called GPS Share, which gives you a link to your current location and lets you send that location as an SMS or any other message (WhatsApp, BBM, etc.).
If you want, I could give you the link to the app page (if I can find it).
SufiyanSadiq said:
I know of an open-source Android app called GPS Share, which gives you a link to your current location and lets you send that location as an SMS or any other message (WhatsApp, BBM, etc.).
If you want, I could give you the link to the app page (if I can find it).
After talking with a few people, I've come to realize how much of a problem object recognition is for computers, and that even simple tasks like recognizing roads, people and buildings can be a problem. Until I can solve this issue in a decent fashion, I'll have to put the project on hold.
Thanks for the help, though. I'll look the app up if/when I can continue the project.
