Can you sample a vertical movement of the device? - Android Software Development

I'm trying to sample a gesture of physically moving the device up and down (not necessarily tilting it). Is this possible?
The only values I get from the accelerometer/orientation/magnetometer come from tilting the device.

Yes.
Download "Sensor Test" from the Market to see what sensor data is available to your program.
Sensing a jerk is easy: simply root-sum-square (RSS) the accelerometer values and test against some threshold.
If you want to know up from down, you need to rotate the acceleration values by the normalized orientation vector.
Then integrate acceleration with respect to time.
p = (1/2)at^2 + v0*t + p0
If you want relative sample-to-sample movements, you can simplify by assigning the initial conditions to 0, thus:
p = (1/2)at^2
where
p = change in position
t = change in time from sample to sample
Note that the accelerometer is probably only accurate to about 0.1 g, which is quite poor, so don't expect super accurate distances.

machine learning gesture algorithm
Thanks, but unfortunately "elevator"-like movements can't be detected by simply using any of the sensors.
I guess for that you'd need to process some data from the device's camera.
Or, for a simpler solution, check whether the natural vertical movement is accompanied by some tilt of the device, which can easily be sensed by any of the built-in sensors.
But on another topic: do you know by any chance where I could find a working machine-learning gesture algorithm?


[proof of concept app]Gesture recognition

I recently saw this thread:
http://forum.xda-developers.com/showthread.php?t=370632
I liked the idea, and when I thought about it, gesture recognition didn't seem too hard. And guess what: it really wasn't hard.
I made a simple application recognizing gestures defined in an external configuration file. It was supposed to be a gesture launcher, but I haven't found out how to launch an app from a WinCE program yet. Also, it turned out to be a bit too slow for that because of the way I draw gesture trails; I'd have to rewrite it almost from scratch to make it really useful, and I don't have time for that now.
So I decided to share the idea and the source code, just to demonstrate how easy it is to include gesture recognition in your software.
My demo app is written in C, using XFlib for graphics, and compiled with CeGCC, so you'll need both of them to compile it (download and install instructions are on the XFlib homepage: www.xflib.net).
The demo program is just an exe file: extract it anywhere on your device, no installation needed. You'll also need to extract gestureConfig.ini to the root directory of your device, or the program won't run.
Try some of the gestures defined in the ini (like the 'M' letter - 8392, a rectangle - 6248, a triangle - 934), or define some of your own to see how the recognition works. Make sure that each line consists of a string of numbers, then a space or a tabulator (or more of them), and some text; anything will do, just make sure that there's more than just the numbers in each line. Below that you can set the side sensitivity to tweak recognition (see the rest of the post for a description of how it works). Better leave the other parameter as it is; it seems to work best with this value.
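A line of that gestureConfig.ini format (digits, whitespace, then a label) can be parsed with something like this hedged C sketch; the function name and field widths are my own, not from the actual source:

```c
#include <stdio.h>

/* Hypothetical parser for one line of a gestureConfig.ini-style file:
   a string of direction digits, whitespace, then a label.
   Returns 1 on success, 0 if the line has no label after the digits. */
int parse_gesture_line(const char *line, char *pattern, char *label) {
    /* %15s: the digit pattern; %63[^\n]: the rest of the line */
    return sscanf(line, "%15s %63[^\n]", pattern, label) == 2;
}
```

A line like "6248 rectangle" would then yield the pattern "6248" and the label "rectangle".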
Now, what the demo app does:
It recognizes the direction of drawn strokes, and prints them at the bottom of the screen as a string of numbers representing them (described below). If a drawn gesture matches one of the patterns in the config file, the entire drawn gesture gets highlighted. It works best with a stylus, but is usable with a finger as well.
Clicking the large rectangle closes the app.
And how it does it:
The algorithm I used is capable of recognizing strokes drawn in eight directions: horizontally, vertically and diagonally. Directions are described with numbers from 1 to 9, arranged like on a PC numeric keypad:
Code:
7 8 9
4   6
1 2 3
So a gesture defined in the config as 6248 is right-down-left-up: a rectangle.
All that is needed for the gesture recognition is the last few positions of the stylus. In my program I recorded the entire path for drawing it, but used only the last 5 positions. The entire trick is to determine which way the stylus is moving, and if it moves one way long enough, store this direction as a stroke.
The easiest way would be to subtract the previous stylus position from the current one, like:
Code:
vectorX=stylusX[i]-stylusX[i-1];
vectorY=stylusY[i]-stylusY[i-1];
But this method would be highly inaccurate due to noise generated by some digitizers, especially with screen protectors, or when using a finger (try drawing a straight line with your finger in some drawing program).
That's why I decided to calculate an average vector instead:
Code:
averageVectorX=((stylusHistoryX[n]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-1]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-2]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-3]-stylusHistoryX[n-5])+
                (stylusHistoryX[n-4]-stylusHistoryX[n-5]))/5;
//Y coordinate is calculated the same way
where stylusHistoryX[n] is the current X position of stylus, and stylusHistoryX[n-1] is the previous position, etc.
Such averaging filters out the noise, without sacrificing too much responsiveness, and uses only a small number of samples. It also has another useful effect - when the stylus changes movement direction, the vector gets shorter.
Now that we have the direction of motion, we have to check how fast the stylus is moving (how long the vector is):
Code:
if(sqrt(averageVectorX*averageVectorX+averageVectorY*averageVectorY)>25)
(...)
If the vector is long enough, we have to determine which direction it's facing. Since horizontal and vertical lines are usually easier to draw than diagonal ones, it's nice to be able to adjust the angle at which a line is considered diagonal rather than horizontal or vertical. I used the sideSensitivity parameter for that (it can be set in the ini file; its range is from 0 to 100). See the attached image to see how it works.
The green area in the images is the angle range where the vector is considered horizontal or vertical; blue marks the angles where the vector is considered diagonal. The sideSensitivity values for those pictures are: left 10, middle 42 (the default value, works fine for me), right 90. Using 0 or 100 would mean that a horizontal/vertical or a diagonal stroke, respectively, would be almost impossible to draw.
To make this parameter useful, some calculations are needed:
Code:
sideSensitivity=tan((sideSensitivity*45/100)*M_PI/180);
First, the range of the parameter is mapped from (0-100) to (0-45), the angle in degrees of the line dividing the right (green) section from the top-right (blue) one. That angle is then converted to radians, and its tangent is calculated, giving the slope of that line.
Having the slope, it's easy to check whether the vector is sideways or diagonal. Here's the part of the source code that does the check; it is executed only if the vector is long enough (the condition written above):
Code:
if( abs(averageVectorY)<sideSensitivity*abs(averageVectorX) ||
    abs(averageVectorX)<sideSensitivity*abs(averageVectorY)) //Vector is turned sideways (horizontal or vertical)
{
    /*Now that we know that it's facing sideways, we'll check which side it's actually facing*/
    if( abs(averageVectorY)<sideSensitivity*averageVectorX) //Right gesture
        gestureStroke='6'; //storing the direction of the vector for later processing
    if( abs(averageVectorY)<sideSensitivity*(-averageVectorX)) //Left gesture
        gestureStroke='4';
    if( abs(averageVectorX)<sideSensitivity*(averageVectorY)) //Down gesture
        gestureStroke='2';
    if( abs(averageVectorX)<sideSensitivity*(-averageVectorY)) //Up gesture
        gestureStroke='8';
}
else
{   //Vector is diagonal
    /*If the vector is not facing sideways, it's diagonal. Checking which way it's
      actually facing and storing it for later use*/
    if(averageVectorX>0 && averageVectorY>0) //Down-Right gesture
        gestureStroke='3';
    if(averageVectorX>0 && averageVectorY<0) //Up-Right gesture
        gestureStroke='9';
    if(averageVectorX<0 && averageVectorY>0) //Down-Left gesture
        gestureStroke='1';
    if(averageVectorX<0 && averageVectorY<0) //Up-Left gesture
        gestureStroke='7';
}
Now we have a character telling which way the stylus is moving (I used the char type so gestures can be stored as character strings: they can easily be loaded from a file and compared with strcmp()). To avoid errors, we have to make sure that the stylus moves in the same direction for a few cycles before storing it as a gesture stroke: increase a counter as long as it keeps moving in one direction, and reset it if it changes direction. If the counter value exceeds some threshold (the pathSensitivity variable in my program), we can store the gestureStroke value into a string, but only if it differs from the previous one; who needs a gesture like "44444" when dragging the stylus left?
After the stylus is released, you'll have to compare the generated gesture string to some patterns (e.g. loaded from a configuration file), and if it matches, perform an appropriate action.
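The counter-and-threshold logic described above can be sketched like this (a hedged reconstruction; PATH_SENSITIVITY and all names are mine, not the actual source):

```c
#include <string.h>

/* A direction char must persist for PATH_SENSITIVITY cycles before
   it is appended to the gesture string, and consecutive repeats of
   the same stroke are dropped. */
#define PATH_SENSITIVITY 3

static char lastDir = 0;  /* direction seen on the previous cycle */
static int  dirCount = 0; /* how many cycles it has persisted     */

void feed_direction(char dir, char *gesture, size_t cap) {
    if (dir == lastDir) {
        dirCount++;
    } else {
        lastDir = dir;
        dirCount = 1;
    }
    if (dirCount == PATH_SENSITIVITY) {
        size_t len = strlen(gesture);
        /* append only if different from the previous stored stroke */
        if ((len == 0 || gesture[len - 1] != dir) && len + 1 < cap) {
            gesture[len] = dir;
            gesture[len + 1] = '\0';
        }
    }
}
```

After the stylus is released, the accumulated string can then be compared against the configured patterns with strcmp().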
See the source if you want to see how it can be done; this post is already quite long.
If you have any questions, post them and i'll do my best to answer.
Feel free to use this method, parts of, or the entire source in your apps. I'm really looking forward to seeing some gesture-enabled programs
Very nice work. Reading your post was very insightful, and I hope this can provide the basis for some new and exciting apps!
Great app... and well done for not just thinking "that seems easy" but actually doing it...
I've been a victim of that myself.
Very nice work, man. One question: in which tool did you write the code? It looks like C, but how did you test it and all?
Great app. I see that it is just a proof of concept at this stage, but I can see it being used for application control in future.
Continue with your great work!
nik_for_you said:
very nice work man.. one question in which tool did you write code.. i mean it looks like C but how you test and all..
It is C (no "++", no "#", no ".NET", just good old C), compiled with the open-source compiler CeGCC (works under Linux, or under Windows using Cygwin, a Unix emulator), developed in the open-source IDE Vham (but even Notepad, or better Notepad++, would do), and tested directly on my Wizard (without an emulator). I used XFlib, which simplifies graphics and input handling to a level where anyone who has ever programmed anything at all should be able to handle it; it provides an additional layer between the programmer and the OS. You talk to XFlib, and XFlib talks to the OS. I decided to use this library because I wanted to try it out anyway.
If I decide to rewrite it and make an actual launcher or anything else out of it, I'll have to use something with a bit faster and more direct screen access (probably SDL, since I've already done some programming for desktop PCs with it). XFlib concentrates on the usage of sprites, like 2D console games: every single "blob" of the gesture trail is a separate sprite, which has to be drawn each time the screen is refreshed; that is what slows down the app so much. The gesture recognition itself is really fast.
Very good program. I just tested it and it works very well; some combinations are pretty hard to perform, but I like this blue point turning red with command2 and 934. Good luck, I'll keep watching your work; maybe you'll code a very interesting soft.
Interesting work.... would like to see this implemented in an app, could be very useful.
If you want, I have some code I wrote for NDS coding, which I ported to PocketPC for XFlib.
It works perfectly well and I use it in Skinz Sudoku to recognize the drawn numbers.
The method is pretty simple: when the stylus is pressed, record the stylus coordinates in a big array. When it's released, take 16 points (could be changed depending on what you need) at the same distance from each other, check the angles, and you get the corresponding 'char'.
To add new shapes, it's just a 15-character string which you link to any char (like linking the right movement to 'r', or 'a', or a number, or whatever). It works for pretty much any simple shape, and I even used it to do a Graffiti-like thing on the NDS which worked really well.
Hey!
How do you get the last stylus positions, and how often do you read them?
I want to implement such code in VB.NET, but I don't know how I should read out the last stylus positions to get them right for such calculations.
Code:
Private Sub frmGesture_MouseMove(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles MyBase.MouseMove
If StylusJump = 1 Then
StylusJump += 1
If (CurrentStylusPosition.X <> frmGesture.MousePosition.X) Or (CurrentStylusPosition.Y <> frmGesture.MousePosition.Y) Then
Dim i As Integer
For i = 9 To 2 Step -1
LastStylusPosition(i).X = LastStylusPosition(i - 1).X
LastStylusPosition(i).Y = LastStylusPosition(i - 1).Y
Next
LastStylusPosition(1).X = CurrentStylusPosition.X
LastStylusPosition(1).Y = CurrentStylusPosition.Y
CurrentStylusPosition.X = frmGesture.MousePosition.X
CurrentStylusPosition.Y = frmGesture.MousePosition.Y
End If
Dim LabelString As String
Dim iCount As Integer
LabelString = "C(" & CurrentStylusPosition.X & "\" & CurrentStylusPosition.Y & ")"
For iCount = 1 To 9
LabelString = LabelString & " " & iCount & "(" & LastStylusPosition(iCount).X & "\" & LastStylusPosition(iCount).Y & ")"
Next
lblGesture.Text = LabelString
ElseIf StylusJump <= 3 Then
StylusJump += 1
Else
StylusJump = 1
End If
End Sub
Sorry, I didn't notice your post before. I guess you have the problem solved now that you've released a beta of Gesture Launcher?
Anyway, you don't really need the last 10 positions; in my code I used only 5 for the calculations and it still worked fine.
Nice thread, thanks for sharing.
Human-machine interface has always been an interesting subject to me, and the release of ultimatelaunch has sparked an idea. I am trying to achieve a certain look-and-feel interface, entirely using components that are today-screen and ultimatelaunch compatible. Basically a clock strip with a few status buttons at the top, and an ultimatelaunch cube for the main lower portion of the screen: gesture left/right to spin the cube, and each face should have lists of info / icons scrolled by vertical gesture. I'm talking big chunky buttons here: tasks, calendar appts, (quick) contacts, music/video playlists, all vertical lists, one item per row, scrolling in various faces of the cube.
Done the top bit using rlToday for now. Set it to type 5 so scrollbars never show for the top section. All good. Cobbling together bits for the faces, but few of the apps are exactly what I want, some (like that new face contacts one) are pretty close, and being a bit of an armchair coder, I thought now would be a good opportunity to check out WM programming and see if I can't at least come up with a mockup of what I want if not a working app.
I was wondering if anyone could advise me on whether I should bother recognising gestures in such a way as this. Does WM not provide gesture detection for the basic up, down, left, right? Actually, all the stuff I have in mind would just require up/down scrolling of a pane. I was thinking that I may well not need to code gesture support at all: just draw a vertical stack of items, let it overflow to create a scrollbar, and then use the normal WM drag-to-scroll feature (if it exists) to handle the vertical scrolling by gesture in the face of the cube. I would rather keep the requirements to a minimum (e.g. TouchFLO), both for dependency and compatibility reasons, so maybe doing the detection manually would be the way to go, I dunno.
Did you release source with the app launcher? A library maybe? I intend to go open source with anything I do, so if you are doing the same then I would love to have access to your working code
Nice work man.
Impressive.

[APP]Water-level/spirit-level

Current version is SpiritLevel_0.9.1.zip http://forum.xda-developers.com/attachment.php?attachmentid=98776
Now I re-engineered the spirit level app a little bit.
It can now also show the raw X, Y and Z tilt data. Thus, you can easily see how much the g-sensor might need to be calibrated. Just check the option "Show Raw Sensor Data" from the menu.
Calibration for this app is ONLY available in SpiritLevel_SVC0.9.zip
works very well, thx!
Great little App, nice one! The trick is using it on a surface that is flat and level to get an accurate reading.
Putting it on my desk at work:
x: 1 degree
y: 7 degree
x Tilt: -19
y Tilt: 117
z Tilt: -879
This keeps changing though, which would indicate that the surface isn't very level.
Also, looking at where the indicators are, it looks slightly off, but not enough to worry me, I think.
Congrats on the useful little app. Cheers!
Great app, but what if the g-sensor is not completely accurate?
As in this thread:
http://forum.xda-developers.com/showthread.php?t=405691&highlight=sensor
Well, I could add an option that (somehow) calibrates the g-sensor, but only for this app: e.g. a menu entry "Calibrate" which then asks you to put the device on a perfectly level spot with 0° on the X and Y axes, and then maybe at 90°. Maybe it will come; it depends on my time. Maybe there is also a registry entry for the g-sensor for this.
fxxxxxx said:
Well, I could add an option that (somehow) calibrates the g-sensor, but only for this app. E.g. a menu entry "Calibrate" which, then asks you to put it on a perfect spot with 0° on X and Y axis and then maybe to 90 ° on X and Y axis. Maybe it will come - depends on my time. Maybe there is also a registry entry for this for the g-sensor.
thanks ;-)
Menkul
GREAT app ;-)
And even better if a calibration option is added.
This would help a lot of people.
fxxxxxx said:
Well, I could add an option that (somehow) calibrates the g-sensor, but only for this app. E.g. a menu entry "Calibrate" which, then asks you to put it on a perfect spot with 0° on X and Y axis and then maybe to 90 ° on X and Y axis. Maybe it will come - depends on my time. Maybe there is also a registry entry for this for the g-sensor.
This would be a great option. Currently the app displays X tilt: 270 when my phone is lying on my desk, and it would be great if I could calibrate it from this position so that X and Y are back to 0.
Ok, there is a new version in post #1. With that version you can calibrate the g-sensor, but ONLY for the spirit-level/water-level application. Currently it has no other effect. I am not sure if there is a way to "calibrate" the g-sensor globally.
On the table in my office I get:
X: 3-4°
Y: 355-357°
X Tilt: -59
Y Tilt: -59
Z Tilt: 820
But the values are not steady, even if I do not move the device the numbers
are changing about twice a second.
Do others have the same behavior?
Yes, it is the same behavior here.
X: 2-3°
Y: 357-359°
X Tilt: ~ -39
Y Tilt: ~ -19
Z Tilt: ~ -938
Keep in mind that 358° is only 2° away from 0°.
cool app
TML1504 said:
On the table in my office I get:
X: 3-4°
Y: 355-357°
X Tilt: -59
Y Tilt: -59
Z Tilt: 820
But the values are not steady, even if I do not move the device the numbers
are changing about twice a second.
Do others have the same behavior?
I have the same experience regarding the switching values.
fxxxxxx said:
Keep in mind, that 358 ° is only 2° from 0 ° away.
Yop, that's clear!
At least I hope so...
If not, I would have to return my degree in mechanical engineering.
TML1504 said:
if not i would have to return my degree in mechanical engineering
me too
sorry for OT
nice software
ykat said:
cool app
I have the same experience regarding the switching values.
Me 2.............
X: 353
Y: 8
X Tilt:96
Y Tilt: 136
Z Tilt: -899
X: 0
Y: 2
X Tilt: 0 (+/- 5)
Y Tilt: 37 (+/-2)
Z Tilt: -770 (+/-15)
Not so bad ^_^
And, as the Diamond's back cover isn't really flat, I tried to watch the results with the face of the Diamond on my desk... hum... not transparent, huhu. So I launched Resco Screen Capture (comes with Resco Photo Viewer) and took the screen after 10 sec (I made 5 tests).
Results :
X: 180 (only one time with 179)
Y: 179 (1° better)... and one time: 180°
X Tilt: 0 (each time)
Y Tilt: ~8 (different values: 18, 18, 8, 8, 3)
Z Tilt: ~1200 (different values: 1206, 1206, 1201, 1190, 1172)
Calibration bug fixed, new version in post #1
Yes! Very good, now it's a real tool for my job. But I have a question: the calibration function is very important for misaligned Diamonds, but can you add an option for saving the data after calibrating? It isn't possible to calibrate in every situation, and it would be nice to have a single calibration at the first start of Spirit-Level... Thanks in advance, and sorry for my bad English.

Camera Blur Gone - Simple Fix

By simply changing the brightness level to -2.0 you will no longer have blur. It's like changing the ISO on a camera, and the FPS will jump by over 50%. Check it out in full here: http://www.fuzemobility.com/decrease-the-blur-of-your-camera-really/
bugsykoosh said:
By simply changing the brightness level to -2.0 you will no longer have a blur - it's like changing the ISO on a camera and the FPS will jump by over 50%. check it out in full here: http://www.fuzemobility.com/decrease-the-blur-of-your-camera-really/
Sounds excellent, will try this out!!
EDIT
I have tried this and can find no improvement whatsoever. As kkchan stated below, all I notice is that the picture has become darker due to the decrease in brightness level. I have even tried -1.0, but still no improvement.
Hi,
I tried it; no improvement. Sometimes it seems even worse because the photo becomes darker.
kkchan said:
Hi,
I tried it; no improvement. Sometimes it seems even worse because the photo becomes darker.
I did it myself on a Touch HD: I went from +2 to -2 and it was a world of difference. At +2 any movement was a blur, and at -2 I could move the camera and still get a clean shot.
Fallen Spartan said:
Sounds excellent, will try this out!!
EDIT
I have tried this and can find no improvement whatsoever. As kkchan stated below, all I notice is the picture has become darker due to the decrease in brightness level. I have even tried this at -1.0, but still no improvement
In extreme light situations (a bright day) there's almost no difference in speed, but the HD never had a problem in very bright light. The real impact is in moderate to low light. So far there are two comments at FuzeMobility, both stating that it works effectively, and I tested it last night on the HD: it took an unusable camera and gave me something that could take a photo. Did you guys enable the FPS info to see if there was a change?
bugsykoosh said:
In extreme light situations (a bright day) there's almost no difference in speed, but the HD never had a problem in very bright light. The real impact is in moderate to low light. So far there are two comments at FuzeMobility, both stating that it works effectively, and I tested it last night on the HD: it took an unusable camera and gave me something that could take a photo. Did you guys enable the FPS info to see if there was a change?
I had already tried tweaking a number of settings for both the camera & video to get a better pic. This may have had some effect on these new settings. I will play around with it and see what I come up with
Fallen Spartan said:
I had already tried tweaking a number of settings for both the camera & video to get a better pic. This may have had some effect on these new settings. I will play around with it and see what I come up with
What settings, if you don't mind? I'm always up for more tweaking. I know about decreasing the delay times:
HKLM/Software/HTC/Camera/Captparam/: set CaptureTimer = 0, EnableCapKeyDelay = 0 and CapKeyDelayTime = 0
and changing the panoramic photo size... I hope you have a few more though.
bugsykoosh said:
What settings if you don't mind? I'm always up for more tweaking I know about decreasing the delay times:
HKLM/Software/HTC/Camera/Captparam/ and set CaptureTimer = 0 and EnableCapKeyDelay = 0 and CapKeyDelayTime = 0
and changing the panoramic photo size...I hope you have a few more though
Basically I read through a lot of threads regarding enhancing the camera and changed numerous things, including those mentioned by yourself (some I can't remember, to be honest) and those mentioned in the wiki listed below. Also using HD Tweak etc.
Increase Quality of Photos
To increase the quality of photos, open the Camera, go to Settings, Advanced, then Image Properties. Increase Contrast to 5, Saturation to 5, and Sharpness to 4. Now you will have better definition and much more realistic colours. Also don't forget to choose Super Fine under Quality in Advanced Settings menu.
Reduce Blurriness in Photos
To focus better, have shutter set just to Touch. After you have touched you have the whole three seconds to steady your hand and take a non-blurry photo.
Activate Hidden Photo Modes
You can activate hidden photo modes using the 'HD Tweak' app. Make sure to leave the resolutions for these modes at 1 megapixel though, or they may not work properly. For the more advanced users, use the following registry entries:
To enable Burst mode:
Mobile Device\HKLM\Software\HTC\Camera\P6
set "Enable" to "1"
To enable Sports mode:
Mobile Device\HKLM\Software\HTC\Camera\P8
set "Enable" to "1"
To enable Video Share mode:
Mobile Device\HKLM\Software\HTC\Camera\P9
set "Enable" to "1"
To enable GPS Photo mode:
Mobile Device\HKLM\Software\HTC\Camera\P10
set "Enable" to "1"
Get True 5 Mega pixel Resolution When Using Camera
To get 5MP instead of 4MP resolution when using your camera, you must switch off Widescreen mode in the 'Advanced' Menu.
Normal screen = 2592 x 1944 pixels = 5,038,848 pixels = 5MP
Widescreen = 2592 x 1552 pixels = 4,022,784 pixels = 4MP
Get Better Video Quality When Using Camera
Use MPEG format instead of H.263 for better-quality video output. This option can be found by going to Advanced Settings from the Video Settings menu. Once there, go to Capture Format and change the format if necessary.
If I can think of any more I'll let you know.
These threads may help:
Best Camera Setting for Taking Pictures
Blackstone Camera 2009 (HTC) Discussion
Fallen Spartan said:
Basically I read through a lot of threads regarding enhancing camera and have changed numerous things including those mentioned by yourself, some I can't remember to be honest and those mentioned in the wiki listed below. Also using HD Tweak etc
Unfortunately I know about those. Thank you though; I'm sure a lot of people will benefit from that list. The ability to do infinite zoom is still elusive as far as I know...
bugsykoosh said:
Unfortunately I know about those Thank you though - I'm sure a lot of people will benefit from that list. The ability to do infinite zoom is still elusive as far as I know...
More settings/tweaks are included in those two threads, I believe.

Drawing on top of a camera view

Hi there. I'm making an AR app and I couldn't figure out a way to draw on top of a camera view. I've created a custom SurfaceView class which uses Camera.startPreview and Camera.setPreviewDisplay to draw the real-time feed from the camera. Now I'm trying to render something on top of it. I have another SurfaceView with a painter thread running at 30 fps, drawing every frame on a Canvas obtained by SurfaceHolder.lockCanvas(). I put the two views in a FrameLayout like this:
Code:
Preview cv = new Preview(this.getApplicationContext());
Pano mGameView = new Pano(this.getApplicationContext());
FrameLayout rl = new FrameLayout(this.getApplicationContext());
setContentView(rl);
rl.addView(cv);
rl.addView(mGameView);
And sadly it doesn't work: it shows only the camera feed. If I switch the order like this
Code:
rl.addView(mGameView);
rl.addView(cv);
The camera feed disappears and only the painter is visible...
How should I make it work?
Phew. Just to tell you: I found a solution. Add this line
Code:
getHolder().setFormat(PixelFormat.TRANSLUCENT);
in the initialization of your overlay view.

[Q] proximity sensor accurate values

Hi,
Using the service diagnostic application on my S4 I can read a range of values (ADC) from the proximity sensor spanning from 0 to 255 (they are more or less proportional to the object distance). Is there a method to get these values within a native (or NDK) app? With the standard Android API I get only a near/far value.
Thanks,
Marco
marco.co said:
Hi,
using the service diagnostic application on my s4 I can read a range of values (ADC) from proximity sensor spanning from 0 to 255 (they are more or less proportional to the object distance). Is there a method to get this values within a native (or ndk) app. With the standard android api I get only near/far value.
Thanks,
Marco
*#0*#
Sensors.
"If someone helps, never forget to hit thanks ? "
DeepankarS said:
*#0*#
Sensors.
"If someone helps, never forget to hit thanks ? "
I know that; I would like to replicate the same behaviour in code in one of my apps.
