USB Mouse support - Acer Iconia A500

Edit:
I released a compiled APK that I believe is done. Please check http://forum.xda-developers.com/showpost.php?p=13809172&postcount=8 for more info!
I made this over the past few days for my Iconia. Since we're going to be getting 3.1 soon, I don't think I want to bother cleaning it up and finishing it.
And, before someone says something about how it's crappy: it was the first Android project (not my first Java/Linux project) I've ever done, and I think it came out pretty well. I've put most of the major issues into //TODO:'s (such as proper detection of which event to use)
Credits to whoever posted the service example I used. I looked, but I couldn't seem to track it back down.
(Feel free to do whatever you want with it, just please include my name somewhere on it. )

Hi, can you post a guide to install your mod? Thank you

sto88 said:
Hi, can you post a guide to install your mod? Thank you
Yeah, I don't know how the zip is related to USB mouse support. Please tell us how to use the mouse.zip. Thanks very much.

netham45 said:
I made this over the past few days for my Iconia. Since we're going to be getting 3.1 soon, I don't think I want to bother cleaning it up and finishing it.
And, before someone says something about how it's crappy: it was the first Android project (not my first Java/Linux project) I've ever done, and I think it came out pretty well. I've put most of the major issues into //TODO:'s (such as proper detection of which event to use)
Credits to whoever posted the service example I used. I looked, but I couldn't seem to track it back down.
(Feel free to do whatever you want with it, just please include my name somewhere on it. )
Firstly, great work to the OP; I never considered redirecting mouse events to the touch screen.
I hope you don't mind, but considering 3.1 is a little while away I'd like to upload this as an apk with a few tweaks.
Tweaks so far:
Moved the SU request to the app instead of the service; that way all su calls are caught by the Superuser manager and permissions are requested appropriately.
Set boundaries so your mouse can't wander off the screen.
Changed the text cursor to a PNG cursor.
When I work out a way to detect which event entry is the mouse I'll upload the APK; until then it's not likely to work for most people.
Keep in mind when I do upload it'll be fragile and possibly won't work for everybody.
Now I'm going to get some much needed sleep

Very nice. I looked into detecting the mouse automatically; all I could come up with was reading the dmesg logs and relating them to getevent's output, which seems a bit over the top for this.
Perhaps ask for which dev to use on launch?

netham45 said:
Very nice. I looked into detecting the mouse automatically; all I could come up with was reading the dmesg logs and relating them to getevent's output, which seems a bit over the top for this.
Perhaps ask for which dev to use on launch?
Yeah, that or have a detection mode which asks you to unplug the mouse if it's plugged in, then plug it in and detect the added entry.

That would work too. You'll still need to match it with an input from getevent, though.
Edit: Just use getevent. It returns every system event, and only the mouse ones will start with 0003.
Edit2: Ignore that edit. Doesn't return info like I was thinking it did.
Edit3: Actually..... my keyboard doesn't send any events that'd conflict.
(values are hex strings)
/dev/input/event4: 0001 0110 00000001 = left down
/dev/input/event4: 0001 0110 00000000 = left up
/dev/input/event4: 0001 0111 00000001 = right down
/dev/input/event4: 0001 0111 00000000 = right up
/dev/input/event4: 0002 0001 XXXXXXX = Y coord change
/dev/input/event4: 0002 0000 XXXXXXX = X coord change
Sec, I'll write up code to parse this type too, and attach it here.
Edit 4: Not going to work decently. getevent buffers far too much when not outputting to a tty.
Edit 5: Okay, I realized that I could check which keycodes a device can send, and that there are some mouse-specific keycodes. (getevent -p)
This should return the path to a mouse, or 'NONE' if there is no mouse.
Code:
// Requires: java.io.BufferedReader, java.io.IOException, java.io.InputStreamReader, android.util.Log
String getMouse()
{
    String message;
    String[] parts;
    String curDevice = "";
    Process process;
    boolean deviceFound = false;
    boolean isInKey = false;
    try { process = Runtime.getRuntime().exec("getevent -p"); }
    catch (IOException e) { return "NONE"; }
    BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
    try
    {
        while ((message = in.readLine()) != null)
        {
            // Any new "NAME (00xx):" section header ends the previous KEY block
            if (message.indexOf("(") >= 0)
                isInKey = false;
            if (message.indexOf("KEY (0001):") >= 0)
                isInKey = true;
            // 0110 is BTN_LEFT, one of the mouse-specific keycodes mentioned above
            if (isInKey && message.indexOf("0110") >= 0)
            {
                deviceFound = true;
                break;
            }
            parts = message.trim().split(" ");
            if (parts.length < 4) continue;
            // "add device N: /dev/input/eventX" lines name the device the following blocks describe
            if (message.indexOf("add device") >= 0)
                curDevice = parts[3];
        }
    }
    catch (IOException e) {}
    if (!deviceFound)
    {
        curDevice = "NONE";
        Log.e("Mouse", "No Mouse Found!");
    }
    else
        Log.i("Mouse", "Mouse Found At: " + curDevice);
    return curDevice;
}
This works for the two mice I have that generate events, a Logitech Wireless USB Mouse and a generic PS/2 -> USB adaptor. For some reason, my Razer Naga doesn't even generate events.
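For reference, the hex triplets listed a few posts up (type 0001 = button, type 0002 = relative motion) map directly onto clicks and cursor deltas, so they can be parsed straight off a getevent stream. A minimal sketch of that mapping - the device path is only an example, the class name is made up, and the buffering issue mentioned in Edit 4 still applies:
Code:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class GeteventParser {
    // Type/code values from the dump above: type 0001 = key/button, type 0002 = relative motion
    private static final int EV_KEY = 0x0001;
    private static final int EV_REL = 0x0002;
    private static final int BTN_LEFT = 0x0110;
    private static final int BTN_RIGHT = 0x0111;
    private static final int REL_X = 0x0000;
    private static final int REL_Y = 0x0001;

    public static void main(String[] args) throws Exception {
        // Example device path; use whatever getMouse() reported
        Process p = Runtime.getRuntime().exec(new String[] {"getevent", "/dev/input/event4"});
        BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.trim().split("\\s+");
            int base = f[0].endsWith(":") ? 1 : 0;   // lines may carry a "/dev/input/eventX:" prefix
            if (f.length < base + 3) continue;       // skip banners and anything unexpected
            int type, code, value;
            try {
                type = Integer.parseInt(f[base], 16);
                code = Integer.parseInt(f[base + 1], 16);
                value = (int) Long.parseLong(f[base + 2], 16);  // signed 32-bit value printed as hex
            } catch (NumberFormatException e) {
                continue;
            }
            if (type == EV_KEY && code == BTN_LEFT) {
                System.out.println(value == 1 ? "left down" : "left up");
            } else if (type == EV_KEY && code == BTN_RIGHT) {
                System.out.println(value == 1 ? "right down" : "right up");
            } else if (type == EV_REL && code == REL_X) {
                System.out.println("dx = " + value);
            } else if (type == EV_REL && code == REL_Y) {
                System.out.println("dy = " + value);
            }
        }
    }
}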

Okay, I felt that this warranted a doublepost. If not, sue me.
I've made this APK for mouse support.
Known issues:
Mouse cursor is slow
Android doesn't create events for all mice, so not all mice are compatible. Simple mice have the highest chance of working.
Makes frequent calls to root when trying to find a mouse. This will drain your battery; if you plan to leave the mouse unplugged for a long time, I suggest opening the app and hitting Stop. Maybe someone could add a timeout to it?
Can't go onto the menu bar. I don't know how to draw over that area too.
Features:
Automatically detects the mouse
Automatically disables cursor when mouse is unplugged
Automatically enables cursor when mouse is replugged
Supports clicking -AND- dragging!
This may not work for everyone. If it doesn't work for you, I'm sorry. Maybe you could talk one of the other devs here into fixing it.
This may work on other tablets, I'm not sure. There is a build compiled with Froyo libs too.
This program requires root.
Attached is the source and a compiled APK. I think I did the APK right; it's the first one I've ever made.
If someone could let me know if this works on bluetooth mice, that'd be awesome. I think it might, but I don't know. I don't have one, and if it doesn't, I won't be the one to add support.

Do you suggest installing the standard or the Froyo version on the Iconia? I tried the standard and it worked fine =)

I don't think it should matter; they both work fine for me.
Glad to hear it works!

Since I've already finished this I'll post it anyway. This detects the mouse by comparing which mouse input devices are present when it's unplugged against when it's plugged in (this doesn't always work with complex mice). It also allows you to override and choose your device; in that case, if it doesn't work, stop the service and try another device.
I've included source which is a bit rushed and is based on the source in the original post
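The plug/unplug comparison described here comes down to a set difference over /dev/input: snapshot the eventX nodes with the mouse out, snapshot again with it in, and whatever appeared in between is probably the mouse. A rough sketch of that idea, assuming the app can list /dev/input (the class and flow are illustrative, not the actual source attached above):
Code:
import java.io.File;
import java.util.HashSet;
import java.util.Set;

public class InputDeviceDiff {
    // Snapshot the eventX nodes currently present under /dev/input
    static Set<String> snapshot() {
        Set<String> nodes = new HashSet<String>();
        File[] entries = new File("/dev/input").listFiles();
        if (entries != null) {
            for (File f : entries) {
                if (f.getName().startsWith("event")) nodes.add(f.getName());
            }
        }
        return nodes;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Unplug the mouse, then press Enter.");
        System.in.read();
        Set<String> without = snapshot();

        System.out.println("Plug the mouse back in, then press Enter.");
        System.in.read();
        Set<String> with = snapshot();

        // Whatever shows up only in the second snapshot is (probably) the mouse
        with.removeAll(without);
        System.out.println(with.isEmpty() ? "No new device found"
                : "Mouse is likely: /dev/input/" + with.iterator().next());
    }
}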

Yours seems to work a lot better than mine. I'm going to have to peek at the source.
Must be some difference in how you draw; it's really noticeable in Angry Birds.

hellcat82 said:
Since I've already finished this I'll post it anyway. This detects the mouse by comparing which mouse input devices are present when it's unplugged against when it's plugged in (this doesn't always work with complex mice). It also allows you to override and choose your device; in that case, if it doesn't work, stop the service and try another device.
I've included source which is a bit rushed and is based on the source in the original post
Works very nicely - thanks. Is there any way to control speed and add mouse pointers?

docfreed said:
Works very nicely - thanks. Is there any way to control speed and add mouse pointers?
Theoretically speed control wouldn't be too difficult. Just a case of multiplying the X and Y coord movements by a ratio.
Dynamically loading mouse pointers should also be possible but could take a bit more time to implement.
I'll try my hand at speed control if I get some time this week.
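For what it's worth, the ratio idea is just a multiply applied to each relative movement before the cursor position is updated; something along these lines (field and method names are illustrative, not hellcat82's actual code):
Code:
public class CursorSpeed {
    // User-adjustable multiplier; 1.0 = raw mouse speed
    private float speedRatio = 1.5f;
    private float cursorX, cursorY;
    private final int screenWidth, screenHeight;

    public CursorSpeed(int screenWidth, int screenHeight) {
        this.screenWidth = screenWidth;
        this.screenHeight = screenHeight;
    }

    // Scale each relative movement, then keep the cursor inside the screen bounds
    public void onMove(int dx, int dy) {
        cursorX = clamp(cursorX + dx * speedRatio, 0, screenWidth - 1);
        cursorY = clamp(cursorY + dy * speedRatio, 0, screenHeight - 1);
    }

    private static float clamp(float v, float min, float max) {
        return Math.max(min, Math.min(max, v));
    }
}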

hellcat82 said:
Theoretically speed control wouldn't be too difficult. Just a case of multiplying the X and Y coord movements by a ratio.
Dynamically loading mouse pointers should also be possible but could take a bit more time to implement.
I'll try my hand at speed control if I get some time this week.
Great, look forward to your work. Also, would this work with a bluetooth mouse?

docfreed said:
Great, look forward to your work. Also, would this work with a bluetooth mouse?
I'm not sure if Android generates events for them (I don't have one to test on), but if it does, it's possible.

Could a trackpad on a Bluetooth keyboard work thanks to this patch?

This is working really well on my A500. Thanks a bunch! With this, along with my BT keyboard and PocketCloud, I can leave the laptop at home for good.
Observations:
Lots of cursor lag when doing moderate to high level tasks
Cursor hidden behind the status bar, though clicking on the buttons still works.
Thanks again!!
Sent from my DROID2 using XDA App

Great
Brilliant!! It works well with a USB keyboard and a Bluetooth mouse via a USB hub

Got this working with a Microsoft wireless keyboard and mouse combo (one RF receiver). It would be perfect for hooking up the unit via HDMI to a home theater. Also, I can confirm that it doesn't work with a Logitech G500 gaming mouse. lol, I didn't really expect that it would, but of course I had to try.


Simple Programming Question embedded VC++

I'm currently doing some programming myself with the free MS embedded VC++, and I'm finding it comfortable to do simple dialog-based programs for PPC. I think I have most of the background code going, and I've just got the GUI... alright.
Now the question: how do I copy/paste to/from the clipboard? I had most of the stuff done using the included MFC Wizard. I can get and send data to/from an EditBox (TextBox, whatever you call it). However, the click-and-hold thing on PPC doesn't seem to work on my EditBox, and hence I'm wondering what's needed to enable a simple Copy/Paste on an EditBox.
Currently, I'm using the simple
Code:
m_editBox = _T("the message I want to show");
UpdateData(FALSE); //send it to the EditBox
Any guidance from here would be appreciated. However, I'm thinking there may not be an easy way to do that, hence I've also tried adding a 'Copy' and 'Paste' button to do the job, but I've tried things like
Code:
SetClipboardData(x, x)
GetClipboardData(x)
None works.
I have also tried
Code:
COleDataObject DataObject;
and with the handle etc etc... but I can't seem to find this COleDataObject. Is that in some other environment (e.g. not the PPC env)?
Help
Fast solution:
http://www.pocketpcdn.com/articles/sip.html
(this shows/hides the SIP on focus gained/lost in edit controls and adds the context menu too)
and this is a simple example of how to copy data to the clipboard
if(OpenClipboard(NULL))
{
    EmptyClipboard();
    HLOCAL clipbuffer = LocalAlloc(0, 100);
    wcscpy((WCHAR*) clipbuffer, (WCHAR*) (vtNumber.bstrVal));
    SetClipboardData(CF_UNICODETEXT, clipbuffer);
    CloseClipboard();
    free(szMsg);
    LocalFree(clipbuffer);
}
I hope this helps
bye
Thanks for your response... things work... a bit
Code:
//put a test char
char *test;
test = (char*) malloc(100);
strcpy(test, "blah blah blah");
//codes you've given
if(OpenClipboard()) //OpenClipboard(NULL) gives me error
{
    EmptyClipboard();
    HLOCAL clipbuffer = LocalAlloc(0, 100);
    wcscpy((WCHAR*) clipbuffer, (WCHAR*) test);
    SetClipboardData(CF_UNICODETEXT, clipbuffer);
    CloseClipboard();
    //free(szMsg); //not sure what 'szMsg' is
    LocalFree(clipbuffer);
}
Things somewhat work. I'm not really sure which part I've got wrong; I suspect some memory allocation is giving me problems. The thing is, if I use 'CF_UNICODETEXT' on the 'SetClipboardData(x,x)' line, I get something to paste into other programs (PPC Notes). BUT, the thing pasted is some funny stuff (e.g. letters that cannot be rendered, hence I get the little squares). If I use 'CF_TEXT', I don't seem to be able to send my stuff to the clipboard, or it makes it invalid for PPC Notes pasting (e.g. I'm not able to paste it in PPC Notes).
Thanks.
BTW, if you are in the mood, can you give me a Paste function as well. Thanks a bunch.
Hi hanmin.
Odd I didn't notice this thread sooner.
Anyway, if you're still having problems with this code, here is the solution:
You are working with char and strcpy so your text is in ASCII (each letter one byte).
But you are calling SetClipboardData with CF_UNICODETEXT so it expects WCHAR (UNICODE) where each letter is two bytes.
The strange letters are the result of two consecutive bytes being interpreted as a single letter (which probably lands you in the Chinese or Japanese region of the Unicode table).
Windows mobile doesn't usually work with ASCII so the text you get from the edit box will already be in Unicode and won't give you any trouble.
The code should look like this:
Code:
//put a test char
CString test; //since you are working with MFC save your self the trouble of memory allocation
test = L"The text I want on clipboard"; //The L makes the string Unicode
//codes you've given
if(OpenClipboard()) //OpenClipboard(NULL) gives me error
{
    EmptyClipboard();
    //not sure why you need to copy it again, but here goes:
    HLOCAL clipbuffer = LocalAlloc(0, test.GetLength() * 2); //remember: every letter 2 bytes long!
    wcscpy((WCHAR*) clipbuffer, (WCHAR*)(LPCTSTR)test); //LPCTSTR is an overloaded operator for CString
    SetClipboardData(CF_UNICODETEXT, clipbuffer);
    CloseClipboard();
    //szMsg probably belongs to some larger application and is irrelevant
    LocalFree(clipbuffer);
}
I never used the clipboard APIs myself so I can't guide you further, but this code should work.
Hope this helps.
Wooo hooo.. Thanks levenum. I'm back in business!
Your code works wonderfully.. just the final line "LocalFree(clipbuffer);" seems to cause problems. Without that, it works. I'm not sure if it will cause a memory leak, but that's not much of a concern for me now.
Now my Paste also works, and it seems the magic code is "LPCTSTR", which I have NO idea what it is (I'm more of a pure C person and.. a Java person ). Thanks again.
Glad I could help.
I am working from Ubuntu right now (a Linux distro in case you didn't know) so I do not have access to my offline MSDN files, but I recommend you check out the documentation for SetClipboardData.
It is possible it releases the memory itself, so when you call LocalFree the handle is no longer valid.
That could also be the reason why you need to allocate memory instead of passing it the string directly.
As for LPCTSTR, it is simple and not C++ related:
typedef const TCHAR* LPCTSTR;
It's M$'s way of saying Long Pointer to Constant STRing.
T changes meaning based on what you are working with:
If you work with ASCII TCHAR is char
If you work with Unicode TCHAR is WCHAR
Basically these are just all redefinitions of variable types so you can distinguish what they are used for.
In C++ you can overload operators. For example you can have a function which changes the way ++ works with certain types of variables.
In our case CString class has a function which determines what happens when you try to cast (convert) it to a pointer to string.
Thats all the "magi" code.
Good luck with your app.
Small update:
Since I had to go into XP anyway (to change from the PDAMobiz ROM, which kept hanging at random and didn't let me use BT, to the latest PDAViet, which for now seems very good), I took a quick peek at the help files.
Here is why you should not release the memory:
After SetClipboardData is called, the system owns the object identified by the hMem parameter. The application can read the data, but must not free the handle or leave it locked. If the hMem parameter identifies a memory object, the object must have been allocated using the LocalAlloc function
levenum, thanks. You've got me almost there. There's still some stuff I need to polish up though. Attached is a premature version of what I wanted to do. There are several issues (including the fact that only 4 characters of the password are effectively used, which can be easily fixed, I think; I just need to find the bug and squash it) that I'd like to polish up. They are sorted in order of importance (to me):
[1] Keyboard (SIP) pop up.
For this, I dug around and learned that the function
"SHSipPreference( HWND, SIP_UP)" is the one to use. However, it never did what it's supposed to do. I put it inside the "OnSetfocusConfirmPasswordEdit()" of the edit box, which should be called when it gets focus. I suspect that I haven't set the HWND correctly. I have tried "NULL" and also tried using the "CWnd* pParent" from my dialog constructor (code generated by the MFC Wizard). None of them worked.
[2] Editbox focusing.
For some reason, the focus on my main-dialog is correct on the editbox of the 'message'. However, on the dialog which is to confirm the password (which I called using
Code:
CConfirmPasswordDlg confirmPasswordDlg;
int nResponse = confirmPasswordDlg.DoModal();
is focusing on the 'Ok' button. What I'd like it to do is focus on the 'confirmPasswordEdit' box, and it ought to automatically pop up the keyboard (SIP).
[3]Reduced size pop up dialog
I was trying to make the 2nd confirm-password dialog smaller, something like a pop-up in the PPC rather than something that takes up the whole screen without much content in it. How would you go about doing that? Is it not possible on PPC? E.g., if you use Total Commander and start copying files around, it has a pop-up that doesn't take up the entire screen. I suspect I shouldn't do a "confirmPasswordDlg.DoModal()" and should somehow do something myself. I have tried the SetVisible(1) thing, but that doesn't work. Or isn't it meant to work because my 1st screen is a dialog screen?
[4]Timer?
I would like a function where, after a certain period of idle time, it clears the clipboard and closes itself. How would I go about doing this? Some sort of background thread thing?
Can anyone shed some light on the issues above? I'm on MS embedded Visual C++ (free), with the Pocket PC 2003 SDK (free).
Attached is the Blender-XXTea edition
Works on PPC2005 and WM5 (should work on WM6)
Does not require .NET framework
VERY small (54K)
hanmin said:
[2] Editbox focusing.
For some reason, the focus on my main-dialog is correct on the editbox of the 'message'. However, on the dialog which is to confirm the password (which I called using
Code:
CConfirmPasswordDlg confirmPasswordDlg;
int nResponse = confirmPasswordDlg.DoModal();
is focusing on the 'Ok' button. What I'd like it to do is focus on the 'confirmPasswordEdit' box, and it ought to automatically pop up the keyboard (SIP).
In your CConfirmPasswordDlg::OnInitDialog handler, call GetDlgItem(confirmPasswordEdit).SetFocus() and return FALSE. That should handle the focus and possibly the SIP popup.
3waygeek said:
In your CConfirmPasswordDlg::OnInitDialog handler, call GetDlgItem(confirmPasswordEdit).SetFocus() and return FALSE. That should handle the focus and possibly the SIP popup.
HEY! The focus works! The working code is
Code:
((CWnd*) CConfirmPasswordDlg::GetDlgItem(IDC_CONFIRM_PASSWORD_EDIT))->SetFocus();
BTW, I'm wondering, what's the effect of returning TRUE/FALSE from 'OnInitDialog()'?
Anyway, the keyboard pop-up is still not working. I'm using the command
Code:
void CConfirmPasswordDlg::OnSetfocusConfirmPasswordEdit() {
SHSipPreference( (HWND)g_pParent, SIP_UP);//
}
where I suspect 'g_pParent' is NULL. If it is NULL, would it work?
Ok, I haven't used MFC for a while and almost not at all on PPC but I will give this a shot:
1) MFC forces dialogs to be full-screen. Here is a detailed explanation on how to change that. Note that for some reason this will work only once if you use the same variable (object) to create the dialog several times.
If you use a local variable in, say, a button handler, that's not a problem because the object is destroyed when you exit the function.
2) There is a simple SetTimer API. You can give it a window handle and then add an OnTimer message handler to that window. Or you could give it a separate function which will be called (say TimerProc). In that case you can give it NULL as window handle.
Note that CWnd objects have a member function with the same name (SetTimer) which sets the timer with that window handle (so that window will receive the WM_TIMER message). If you want the raw API, call ::SetTimer.
Also note that the timer will continue to send messages / call your special function every x milliseconds until you call KillTimer.
3) I am not sure what the problem with the SIP is. CWnd and derived classes like CDialog have a function called GetSafeHwnd (or GetSafeHandle, I don't remember exact name). Try passing that to SHSipPreference.
If that does not work here is an article with an alternate solution.
WOHO!! Everything works NOW!!.. MUAHAHHAHA.. wait til you see my release version
Non-maximized windows work using the code suggested on that page. Although I still do not understand where the heck this '.m_bFullScreen' property came from; it is not anywhere obvious to be seen.
Code:
CNfsDlg dlg;
dlg.m_bFullScreen = FALSE;
dlg.DoModal();
Timer works using the
Code:
xx{
    //your code...
    CBlenderDlg::SetTimer(1, 5000, 0); //event 1, 5 seconds, something
    //your code...
}

void CBlenderDlg::OnTimer(UINT nIDEvent){
    //do something here for the timer
}
although somehow, the OnTimer() only works if I go through the MFC Class Wizard to add the WM_TIMER handler. It doesn't work when I add the OnTimer() myself. Must be something else that I've missed. Anyway.
Keyboard issue solved using
Code:
SHSipPreference( CBlenderDlg::GetSafeHwnd(), SIP_UP);
Glad it's working out for you.
Couple of comments:
1) Somewhere at the top of the cpp file, if I am not mistaken, there is something called a message map. It's a bunch of macros that lets MFC know which window messages the class handles. An entry there is what was missing when you added the function manually.
2) m_bFullScreen is just another among many undocumented features. M$ likes to keep developers in the dark. For instance WM 2003 and up have an API called SHLoadImage which can load bmp, gif, jpg and some other formats and return HBITMAP which all the usual GDI functions use.
This API was undocumented until WM 5 came out and even then they said it only works for gif...
hanmin said:
BTW, I'm wondering, what's the effect of returning TRUE/FALSE from 'OnInitDialog()'?
The return value indicates whether or not the standard dialog handler (which calls your OnInitDialog) should handle setting the focus. As a rule, OnInitDialog should return TRUE, unless you change the focus within the handler (or you're doing an OCX property page on big Windows).
I haven't done much WinMob/CE development -- I've been doing big Windows for 15+ years, so window message handling is pretty much second nature. I started doing Windows development back in the days when you didn't have C++ or MFC boilerplate; you had to implement your own DialogProc, crack the messages yourself, etc. It's a lot easier now.
CommandBar / MenuBar
I'm back.. with more questions
Not much of a major issue, but rather an annoying thing I've found. Probably that's what evc/mfc/m$ intended.
Anyway, I'm starting to work my way around CommandBar. I created an MFC skeleton, studied the code, and this is what I've found after creating a CommandBar/MenuBar in evc and putting it in
Code:
if(!m_mainMenuBar.Create(this) ||
   !m_mainMenuBar.InsertMenuBar(IDR_MAINMENUBAR) ||
   !m_mainMenuBar.AddAdornments() ||
   !m_mainMenuBar.LoadToolBar(IDR_MAINMENUBAR))
{
    TRACE0("Failed to create IDR_MAINMENUBAR\n");
    return -1; // fail to create
}
where I have the variable 'CCeCommandBar m_mainMenuBar' and I have created a MenuBar in evc with the ID 'IDR_MAINMENUBAR'. The menu bar works flawlessly in my dialog-based application when I have the 1st level as a pop-up. Example:
MenuBar --> 'File' --> 'New', 'Save'
Where 'File' is a folder-like thing that pops up and shows the contents (i.e. in this example, 'New' and 'Save'). However, given the SAME code to load the CommandBar/MenuBar, it will not work if I put the actual command at the 1st level. For example, this will not work:
MenuBar -> 'New', 'Save'
where there isn't any folder-like pop-up to store the commands 'New' and 'Save'.
I know that I can have buttons for these commands, and that probably works. But what I'm trying to do is utilize the bottom-left/right softkeys in WM5/6. If I have the 'File'->'New','Save' structure, it works fine with WM5, showing it as a softkey. But if I do just 'New','Save' it will not show up in either the WM2003 emulator or WM5.
As a matter of fact, even if I have (say) File->New,Load and I add a new 1st-level command (i.e. not a folder-like pop-up), for example 'Help', on the CommandBar/MenuBar, the File->New,Load will not show up either. It seems like a 1st-level command (i.e. without a folder pop-up) causes some problems and stops it from loading further.
Guys, ring any bell?
two bytes more
levenum said:
Hi hanmin.
Odd I didn't notice this thread sooner.
Anyway, if you're still having problems with this code, here is the solution:
You are working with char and strcpy so your text is in ASCII (each letter one byte).
But you are calling SetClipboardData with CF_UNICODETEXT so it expects WCHAR (UNICODE) where each letter is two bytes.
The strange letters are the result of two consecutive bytes being interpreted as a single letter (which probably lands you in the Chinese or Japanese region of the Unicode table).
Windows mobile doesn't usually work with ASCII so the text you get from the edit box will already be in Unicode and won't give you any trouble.
The code should look like this:
Code:
//put a test char
CString test; //since you are working with MFC save your self the trouble of memory allocation
test = L"The text I want on clipboard"; //The L makes the string Unicode
//codes you've given
if(OpenClipboard()) //OpenClipboard(NULL) gives me error
{
    EmptyClipboard();
    //not sure why you need to copy it again, but here goes:
    HLOCAL clipbuffer = LocalAlloc(0, test.GetLength() * 2); //remember: every letter 2 bytes long!
    wcscpy((WCHAR*) clipbuffer, (WCHAR*)(LPCTSTR)test); //LPCTSTR is an overloaded operator for CString
    SetClipboardData(CF_UNICODETEXT, clipbuffer);
    CloseClipboard();
    //szMsg probably belongs to some larger application and is irrelevant
    LocalFree(clipbuffer);
}
I never used the clipboard APIs myself so I can't guide you further, but this code should work.
Hope this helps.
I know it is a bit late! But there is a mistake in the code snippet:
HLOCAL clipbuffer = LocalAlloc(0, test.GetLength() * 2); //remember: every letter 2 bytes long!
needs to be
HLOCAL clipbuffer = LocalAlloc(0, test.GetLength() * 2+2);
The terminating 0 is also 2 bytes long!
Those errors are sometimes fatal, with very little chance of finding them!
ms64o

Interest in a task automater / macro recorder and playback utility?

I wrote a program for Windows 2000/XP called Do It Again ( http://www.spacetornado.com/DoItAgain/ ) that does simple task automation (also called macro recording and playback).
You click "Create a New Task" and it then records all of your keystrokes and mouse clicks in any program until you hit a certain hot key (Scroll Lock by default). You can then run the task back in a number of different ways and it will recreate your keystrokes and mouse clicks... either with the same pauses in between, or it can speed them up. And there are some other options available as well (repeating tasks, manually extending pauses between actions, etc).
I thought something like this might be useful for Windows Mobile devices. I wrote Do It Again in C# using .NET Framework 2.0, so I'm guessing it should be fairly easy to port to .NET Compact Framework 2.0.
But I only want to start porting this to Windows Mobile if there is any interest in it, and if people think they might get some use out of it. I know for me personally there are several things I do on my WinMo device that get repetitive and monotonous... starting and minimizing GPS Test to activate the GPS adapter, dismissing old Notifications, marking all email as Read, etc.
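The record/playback loop described above is essentially a list of timestamped actions replayed with scaled pauses; a minimal, language-agnostic sketch of that idea (written in Java here purely for illustration - Do It Again itself is C#):
Code:
import java.util.ArrayList;
import java.util.List;

public class MacroReplay {
    // One recorded step: what to do, and how long after the previous step it happened
    static class Step {
        final Runnable action;     // e.g. inject a click or a keystroke
        final long delayMillis;    // pause recorded before this step
        Step(Runnable action, long delayMillis) {
            this.action = action;
            this.delayMillis = delayMillis;
        }
    }

    private final List<Step> steps = new ArrayList<Step>();

    public void record(Runnable action, long delaySincePrevious) {
        steps.add(new Step(action, delaySincePrevious));
    }

    // speedFactor 1.0 = original timing, 2.0 = twice as fast, and so on
    public void play(double speedFactor) throws InterruptedException {
        for (Step step : steps) {
            Thread.sleep((long) (step.delayMillis / speedFactor));
            step.action.run();
        }
    }
}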
This sounds like a perfect application which can be used for pretty much anything.
Personally, i'm very interested in this, and would be glad to have it ported.
There are a lot of people who're looking to do things the easy way around here, for instance setting up accounts, changing settings, and so forth, without having to go through 8 clicks...
nir36 said:
This sounds like a perfect application which can be used for pretty much anything.
Personally, i'm very interested in this, and would be glad to have it ported.
There are a lot of people who're looking to do things the easy way around here, for instance setting up accounts, changing settings, and so forth, without having to go through 8 clicks...
I would find this very useful for using both Google Maps and Tom Tom Nav.. Say you want to go somewhere new.. Use google maps to find the address and then activate the script, prime the GPS, load Tom Tom, then copy paste the address info into Tom Tom. One cycle activated by a hot key would be nice.
norkoastal said:
I would find this very useful for using both Google Maps and Tom Tom Nav.. Say you want to go somewhere new.. Use google maps to find the address and then activate the script, prime the GPS, load Tom Tom, then copy paste the address info into Tom Tom. One cycle activated by a hot key would be nice.
Yeah, that would be a perfect example of how you could use it. And it would be a lot safer to have a task automation app perform those steps for you instead of having to do it yourself while driving.
I'll start working on this in a week or so (as soon as I can get Scrobble finished up and to a point where people stop requesting more features! ) and see if it can be ported over.
It should be easy; there are just a couple Win32 library calls it makes that I'm not sure are available in WinMo 6.1:
Code:
// imports mouse_event function from user32.dll
[SRI.DllImport("user32.dll", CharSet = SRI.CharSet.Auto, CallingConvention = SRI.CallingConvention.StdCall)]
public static extern void mouse_event(int dwFlags, int dx, int dy, uint cButtons, uint dwExtraInfo);
// imports keybd_event function from user32.dll
[SRI.DllImport("user32.dll", CharSet = SRI.CharSet.Auto, CallingConvention = SRI.CallingConvention.StdCall)]
public static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, uint dwExtraInfo);
These two functions, mouse_event() and keybd_event(), synthesize mouse and keyboard events (clicks, movement, keypresses, etc.) globally... in any window, program, and so on.
Does anyone know off the top of their head if user32.dll is also available in Windows Mobile 6.1 Professional?
I am *VERY* interested in this application. How is the porting coming? I can't wait!!!
Man. That would be a revolution for us PPC-customizing nerds! Holy ****! ..that would actually be Holy ****!
There is a version of AutoHotKey for Pocket PCs / WinCE, still in development, but a lot of it is working; see here: http://www.autohotkey.com/forum/topic27146.html&highlight=wince.
I'm very interested. In fact, I was looking for such an application but gave up searching.
Looking forward.
Thanks.
Hi,
if it gets as good as Scrobble, then I vote for it . This would be a very handy feature, and moreover very clever.
I would test it with pleasure.
Greets.
It will be a very useful app!
Looking forward to it; in the meantime I'll play with the XP version!!

[Q] Multi-Touch support on Nook Simple Touch?

Ok, I rooted my Nook Simple Touch (Running 1.1) using:
http://forum.xda-developers.com/showthread.php?t=1351719
Many things aren't working as expected (e.g. The Android Market doesn't work), but in reading the thread it sounds like that might be a common problem. I do have root access, and can connect via ADB and side-load apps.
I'm looking to do pinch-zoom on PDF documents, but I can't seem to get it to work in Aldiko, and I'm not sure if that's because pinch-zoom isn't supported, or I'm doing something wrong.
There appears to be conflicting information on multi-touch support on the Nook Simple Touch on the forums.
Can anyone clarify whether this support is present, and if so, maybe suggest a good PDF app that enables it?
Well, on versions 1.0.0 and 1.0.1 there was no support for multitouch. When we read the announcement for 1.1.0, we saw that multitouch would be enabled, but nobody has seen a trace of that yet.
I talked to some guys in here, and asked for info on where it can be enabled (it is disabled when rooting).
I downloaded a MIDI controller (an app that shows a bunch of sliders on the screen and communicates their positions to a PC) and I noticed I could move a vertical slider and a horizontal one at the same time... but then Google Maps didn't react to my pinching.
I have tried with Multitouch Test ( https://market.android.com/details?...DEsImRlLmdyZWVucm9ib3QubXVsdGl0b3VjaHRlc3QiXQ..) and it looks like the hw is capable of multitouch!
I don't know how to enable it, though
I've tried to enable it by using this http://nookdevs.com/NookColor_Enable_MultiTouch
but Multitouch tester only detects one input....
Although if I used two fingers, the second registered movement intermittently, jumping from one to the other instead of showing two solid points.
You can try this:
Unpack this file to where adb is (unless you have adb in your path)
Then in cmd:
Code:
adb shell
mount -o remount,rw -t ext2 /dev/block/mmcblk0p5 /system
exit
adb push android.hardware.touchscreen.multitouch.distinct.xml /system/etc/permissions/android.hardware.touchscreen.multitouch.distinct.xml
A reboot is probably needed
Thanks, going to try it right now.
---------- Post added at 10:13 PM ---------- Previous post was at 10:03 PM ----------
Same behaviour, mmmmmm, are we sure they enabled multitouch?
Anyway thanks for the help
eded333 said:
Thanks, going to try it right now.
---------- Post added at 10:13 PM ---------- Previous post was at 10:03 PM ----------
Same behaviour, mmmmmm, are we sure they enabled multitouch?
Anyway thanks for the help
Well:
Code:
include/linux/zforce.h:#define ZF_NUM_FINGER_SUPPORT 2
..
drivers/input/touchscreen/zforce.c: #define ZF_SETCONFIG_DUALTOUCH 0x00000001
I'd say yes
ros87 said:
Well:
Code:
include/linux/zforce.h:#define ZF_NUM_FINGER_SUPPORT 2
..
drivers/input/touchscreen/zforce.c: #define ZF_SETCONFIG_DUALTOUCH 0x00000001
I'd say yes
Then something else is missing
eded333 said:
Then something else is missing
Very likely.
Maybe some kernel flag .. I dunno .. but I'm sure Google can tell you how that chain is supposed to be set up
eded333 said:
I've tried to enable it by using this http://nookdevs.com/NookColor_Enable_MultiTouch
but Multitouch tester only detects one input....
Although if I used two fingers, the second registered movement intermittently, jumping from one to the other instead of showing two solid points.
Which Tester did you use? I could see it work only with the one I linked, from greenrobot...
I have also tried to enable multitouch per the Nookdevs instructions, but it had no effect.
met67 said:
Which Tester did you use? I could see it work only with the one I linked, from greenrobot...
I have also tried to enable multitouch per the Nookdevs instructions, but it had no effect.
I used Multitouch Tester, but that one gives the same output as the one I used: it doesn't really show two points, it jumps intermittently but quickly from one to the other, and if you check the number of pointers, it's always 1.
I tried both XML's and I'm getting the same results as everyone else.
The multitest app doesn't detect the multitouch (not even periodically) for me.
Arg, I was hoping this was going to be an easy question :-(.
Well, what is 100% certain is that the driver is building the data needed for dualtouch.
Code:
int touchdata_collect( u8* payload )
{
    u8 i;
    reported_finger_count = payload[0];
    framecounter++;
    if( reported_finger_count > ZF_NUM_FINGER_SUPPORT )
    {
        zforce_error("Detected (%d) more fingers the max(%d) number supported\n",
                     reported_finger_count, ZF_NUM_FINGER_SUPPORT );
        return -EINVAL;
    }
    for( i = 0; i < reported_finger_count; i++ )
    {
        tinfo[i].x      = (u16)((payload[2+i*ZF_COORDATA_SIZE]<<8) |
                                 payload[1+i*ZF_COORDATA_SIZE]);
        tinfo[i].y      = (u16)((payload[4+i*ZF_COORDATA_SIZE]<<8) |
                                 payload[3+i*ZF_COORDATA_SIZE]);
        tinfo[i].id     = (u8)((payload[5+i*ZF_COORDATA_SIZE]&0x3C)>>2);
        tinfo[i].state  = (u8)((payload[5+i*ZF_COORDATA_SIZE]&0xC0)>>6);
        tinfo[i].rsvrd  = (u8)( payload[6+i*ZF_COORDATA_SIZE] );
        tinfo[i].prblty = (u8)( payload[7+i*ZF_COORDATA_SIZE] );
        tinfo[i].valid  = 1;
        tinfo[i].z      = reported_finger_count == 0 ? 0 : 20;
    }
    return reported_finger_count;
}
How that data is used further on in the chain is beyond my knowledge, but the data is there.
This is a somewhat absurd situation, as usually the lack of dual/multitouch is at the hardware/driver level.
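Whether those two driver-reported fingers actually make it up through the input stack is easy to probe from a normal activity with the standard MotionEvent API; a minimal check along these lines (this only observes what the framework delivers, it doesn't enable anything):
Code:
import android.app.Activity;
import android.util.Log;
import android.view.MotionEvent;

public class PointerCountProbe extends Activity {
    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // If dual-touch data survives the trip, this should log 2 while two fingers are down;
        // with the behaviour reported in this thread it stays at 1.
        Log.d("PointerCountProbe", "pointers = " + event.getPointerCount());
        return true;
    }
}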
ros87 said:
Well, what is 100% certain is that the driver is building the data needed for dualtouch.
Code:
int touchdata_collect( u8* payload )
{
    u8 i;
    reported_finger_count = payload[0];
    framecounter++;
    if( reported_finger_count > ZF_NUM_FINGER_SUPPORT )
    {
        zforce_error("Detected (%d) more fingers the max(%d) number supported\n",
                     reported_finger_count, ZF_NUM_FINGER_SUPPORT );
        return -EINVAL;
    }
    for( i = 0; i < reported_finger_count; i++ )
    {
        tinfo[i].x      = (u16)((payload[2+i*ZF_COORDATA_SIZE]<<8) |
                                 payload[1+i*ZF_COORDATA_SIZE]);
        tinfo[i].y      = (u16)((payload[4+i*ZF_COORDATA_SIZE]<<8) |
                                 payload[3+i*ZF_COORDATA_SIZE]);
        tinfo[i].id     = (u8)((payload[5+i*ZF_COORDATA_SIZE]&0x3C)>>2);
        tinfo[i].state  = (u8)((payload[5+i*ZF_COORDATA_SIZE]&0xC0)>>6);
        tinfo[i].rsvrd  = (u8)( payload[6+i*ZF_COORDATA_SIZE] );
        tinfo[i].prblty = (u8)( payload[7+i*ZF_COORDATA_SIZE] );
        tinfo[i].valid  = 1;
        tinfo[i].z      = reported_finger_count == 0 ? 0 : 20;
    }
    return reported_finger_count;
}
How that data is used further on in the chain is beyond my knowledge, but the data is there.
This is a somewhat absurd situation, as usually the lack of dual/multitouch is at the hardware/driver level.
That is VERY interesting.
Is it possible that it's something simple, like the permissions XML file having a different name on the Simple Touch?
I'm monitoring this thread. Sure hope some multi-touch function can be gotten out of the NST
I tried the xml file from the Sony PRS-T1 as well (which apparently has multitouch), but still no luck.
It did not look much different, though.
I uploaded the files from the Sony reader here, in case someone wants to look into them: mediafire.com/?vuv6v4y4yd44o97
(Won't let me post a clickable link, sorry)
I hope to see multitouch working well on the NST
Market
Install the "SearchMarket" app. And the Android Market will work just fine!
I may be wrong, but isn't touch screen tech infrared on Nst? if so multitouch is not possible.

LiveView reverse-engineering effort

Hi all,
A few weeks ago I started taking apart the LiveView software and manager. I'm really unhappy with the current plugin system, the menu structure and more. So, I started to reverse-engineer the Bluetooth protocol. I'm at the very beginning but it's looking promising.
Here's the repo: https://github.com/BurntBrunch/LivelierView
The protocol is not very difficult - just request-acknowledge-response serial communication over RFCOMM. Also, the kind people from SE didn't run the manager through Proguard (wink, wink, nudge, nudge ).
I also have what I *think* is a dump of the firmware but it seems either compressed or encrypted. Binwalk didn't find anything in it. If someone would be kind enough to take apart the software updater, we might figure out what's running on the actual device as well.
Overall, I'm just starting but so far it's looking good (got time syncing working! it's at least a watch, if nothing else! ).
Any help would be greatly appreciated (pull requests are more than welcome! )
Thinking of doing something similar with one of my gadgets.
What did you use to reverse-engineer the Bluetooth protocol - just Wireshark and a Bluetooth dongle?
Neither. I did it from disassembly of the manager - much easier than sniffing and guessing.
If you don't have that option and said gadget connects to an Android phone, put on a decent ROM with the full BlueZ stack (e.g., Cyanogen) and use hcidump. It's really, really useful!
Come to think of it, Wireshark might be good enough - the only thing I found useful about hcidump was the SCO audio dump.
Nice effort. I've already forked your work on GitHub and might have a look at it soon. I've got some geeky ideas for myself as well, and I think integrating this functionality natively into CyanogenMod, or even a custom app to replace SE's, would be great to have as well.
Nice,
I was disappointed by the LiveView manager myself; I hope something good emerges from your work.
I've also decompiled the APK, and it seems that everything that displays on screen comes from the application, which means everything could be customized. It seems SE is using a PNG lib, LodePNG, to convert images and push them to the phone. Also, when it comes to strings, I've found some useful references in JerryProtocol that might indicate the correct text encoding (not that we can push it right now, but just for the record):
Code:
private static final String mEncoding = "iso-8859-1";
private static final char cCarriageReturn = '\r';
private static final char cLineFeed = '\n';
Controlling the LED seems quite simple too; it seems the message's data is divided into 3 parts:
[RGB] [DELAY = Integer Number] [ON STATE = 0|1]
[old]although I've not figured out the ID of the LED control yet[/old].
LED request ID is 40 and LED response ID is 41. Hope this is enough for you to get started on that one too
I've not yet tested the app, but I've read your code and gave decompiling a shot to see what I could dig up; I'll try it later (not very used to running Python scripts though, I'll have to see how to install pyserial first and all that).
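Purely as an illustration of packing those three parts: the big-endian DataOutputStream approach mentioned later in this thread can assemble the payload, but the field widths below (three RGB bytes, a 16-bit delay, a one-byte state) are guesses for the sake of the example, and the request framing around the payload is left out entirely:
Code:
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LedPayload {
    // IDs from the post above; everything about the field widths is assumed, not confirmed
    public static final int MSG_LED_REQUEST = 40;
    public static final int MSG_LED_RESPONSE = 41;

    // Pack [RGB] [DELAY] [ON STATE] big-endian, which is DataOutputStream's default byte order
    public static byte[] build(int red, int green, int blue, int delayMs, boolean on)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(red);
        out.writeByte(green);
        out.writeByte(blue);
        out.writeShort(delayMs);       // assumed 16-bit delay
        out.writeByte(on ? 1 : 0);     // assumed one-byte on/off flag
        out.flush();
        return bytes.toByteArray();    // still needs the surrounding request framing (id 40, lengths, ...)
    }
}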
pedrodh said:
it seems that everything that displays on screen comes from the application
Yup, the main stuff is on the phone - the state machine is clearly isolated (on a side note, the manager is rather well-written, thankfully). On the other hand, I'm somewhat confused by all the constants - it almost feels as if the device has native navigation or icon cache or something.
pedrodh said:
Controlling the LED seems quite simple too; it seems the message's data is divided into 3 parts:
[RGB] [DELAY = Integer Number] [ON STATE = 0|1]
LED request ID is 40 and LED response ID is 41. Hope this is enough for you to get started on that one too
Thanks for the interest and the tip, I'll look into it soon - I need to figure out a good way to send commands from stdin. It seems that I'll need to figure out non-blocking reading in Python anyway (good news for you - I might drop pyserial! )
In any case, I'll add it to protocol.txt, unless you beat me to it!
Lastly, the only reason it's in Python is 'cause I'm productive in it *and* it has good, fast bindings (I try to stay away from gobject in C!). Whatever comes out of this effort would be running on the phone, surely
Edit: You *did* beat me to it!
Edit: Implemented LED, vibration, and a pretty good scheme for sending commands from the CLI
Nice work, saw quite a few commits in a small amount of time.
I've not yet been able to run it successfully. I think I have installed pyserial correctly, but maybe the problem is that the BlueZ that comes with my Ubuntu is somewhat newer than the one you used. Anyway, here's as far as I got: http://pastebin.com/uVRdr5T3 - if you happen to know what it is just by looking at it, that would be great.
I've started an Android application project in hopes of porting this to an Android application as well, but I'm somewhat new to Bluetooth handling on Android and still working it out. I'm already able to connect and pair with the device (noob stuff), but it fails to READ from it. I've used Java's DataOutputStream and DataInputStream since they deal with data in big-endian notation, but I haven't understood yet how the initialization process goes. I've looked at your code; I get some parts but not the whole thing yet. Do you have to wait for the LiveView to tell you something back, or can you just start sending commands right away? Also, does the script act as a Bluetooth server or client? (It seems that they are distinct when coding on Android; I've chosen to connect as a client, and yes, I used the same UUID that you got from decompiling, so at least that part I guess to be correct.)
Anyway, it's just a bunch of very ugly code at the moment; after I get it to do something useful I'll clean up the project and host it on GitHub as well.
Hmm, that error is rather suspicious. Looking at the docs, Connect() is not even supposed to throw org.bluez.Failed, let alone with that message. And service discovery supposedly finished successfully..
Was the device in pairing mode (with the arrows/circle turning)? Was the computer the last thing it paired with (once you pair with the computer, the phone shouldn't be able to connect to it, since the device only remembers the last authorization)?
Install d-feet, the DBus browser, go under System bus, org.bluez, find the device, verify that it has the org.bluez.Serial interface and try calling Connect() with the proper UUID from there. Other than that, I've really no idea what it's on about.. Do you have more than one LiveView device by any chance (weird things might happen then)?
I don't actually think it's the difference in bluez versions (the Serial interface hasn't changed in the past 2 and something years) but it might be a (driver) bug you're hitting. I *think* I'm doing everything right as far as communication with BlueZ is concerned. Try running `hciconfig hci0 reset`.
Sorry I couldn't be more helpful..
Regarding your Java effort, if I recall my Bluetooth terminology correctly, you are a client, since the server is the thing advertising the service. You should *not* be reading immediately from the device. The phone/computer sends the first message - in my case, my first message is always STANDBY. Then and only then can you start reading back.
Lastly, I hope Android abstracts the whole RFCOMM pipe thing, 'cause it's a pain to use (and the reason I still need pyserial) - select() would sporadically tell me it has data to read and when I try to read it, I get ERRIO :/ I suspect RTS triggers select()..
Make sure you're only reading as many bytes as you know are in the next packet (take a look at consume() - it returns the number of bytes it expects next) and not more than that - it would either block or throw an exception. I've not done any Bluetooth work on Android, so that's as much as I can help, I'm afraid.
Lastly, as big as the temptation is, do not under any circumstances reuse code from the official manager. "Sony" is in the name of the company after all. I'm half-expecting a Cease & Desist any moment now
Edit: Implemented Display Properties Request and Clear Display Request (doesn't do anything). I think I'm out of low-hanging fruit
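On the Android side, the client role described here maps onto the standard RFCOMM socket API. A rough, hypothetical sketch of that setup - the UUID below is only a placeholder for the one pulled out of the official manager, and the standby message is left as a stub rather than real protocol bytes:
Code:
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.UUID;

public class LiveViewClient {
    // Placeholder only: the real UUID comes from decompiling the official manager
    private static final UUID SERVICE_UUID =
            UUID.fromString("00000000-0000-0000-0000-000000000000");

    public static void talk(String macAddress) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice device = adapter.getRemoteDevice(macAddress);

        // We are the client: the LiveView advertises the service, the phone connects to it
        BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SERVICE_UUID);
        adapter.cancelDiscovery();   // an ongoing discovery slows down or breaks connect()
        socket.connect();

        // Big-endian streams, matching the protocol's byte order
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());

        // The phone speaks first (e.g. the standby/init message); only then read replies,
        // and only as many bytes as the previous packet says to expect
        out.write(buildStandbyMessage());
        out.flush();
        int firstByte = in.readUnsignedByte();   // read incrementally, never more than announced
        // ... parse the rest according to the reverse-engineered packet layout ...

        socket.close();
    }

    private static byte[] buildStandbyMessage() {
        // Intentionally a stub: the real bytes belong in the protocol notes, not here
        return new byte[0];
    }
}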
Really interesting work, guys. The LiveView is a fantastic idea and is almost brilliant - if only it worked properly! If you could get the basics working properly so we don't have to use the Sony software, that would be fantastic; it's got so much potential.
Cheers,
Tim
So, I had a brilliant idea today. You know how the LiveView Manager app is full of debug messages. Turns out, they are disabled by means of a constant in ElaineUtils. My idea was to change that constant, put the apk back on my phone and rejoice from all the extra info I'd have.
Turns out, that's not how it works. I changed the constant (bumped it to 0x100 - literally a single bit change) and re-signed the apk. I got some output out of it but not all, and none of the useful ELEMENT_ID_* messages
Any help on that front would massively speed up the reverse-engineering effort.
EDIT: Scratch that, I'm stupid. I forgot that the .field annotations are not executable code - I was changing the wrong bit so to speak. Changed the value in <cinit> and voila, proper logcat!
EDIT: Here's some food for thought - http://pastebin.ca/2099804 - it's the log from startup + a bit of moving around and opening/closing the mediaplayer control.
Very cool project.
I believe, for the damn thing to be usable, focusing on improving Bluetooth performance would be quite good. By "performance" I mean "power consumption." Having to give up on the watch after two hours of light use is really unacceptable.
I would love it if you got this thing working efficiently like SmartWatchm/OpenWatch did for my MBW-150. I ordered my LiveView from the UK when it first released there instead of waiting for the US release. The darn thing disappointed the hell out of me and has been sitting in my garage for almost a year now.
Hopefully you get something going on with this.
archivator said:
So, I had a brilliant idea today. You know how the LiveView Manager app is full of debug messages. Turns out, they are disabled by means of a constant in ElaineUtils. My idea was to change that constant, put the apk back on my phone and rejoice from all the extra info I'd have.
Turns out, that's not how it works. I changed the constant (bumped it to 0x100 - literally a single bit change) and re-signed the apk. I got some output out of it but not all, and none of the useful ELEMENT_ID_* messages
Any help on that front would massively speed up the reverse-engineering effort.
EDIT: Scratch that, I'm stupid. I forgot that the .field annotations are not executable code - I was changing the wrong bit so to speak. Changed the value in <cinit> and voila, proper logcat!
EDIT: Here's some food for thought - http://pastebin.ca/2099804 - it's the log from startup + a bit of moving around and opening/closing the mediaplayer control.
Wow, that's very useful, thank you. I've been very occupied and haven't worked more on the Android-side application since my last post; I intend to return to it soon enough though. That output is very welcome when it comes to understanding when the icons are sent and the whole mechanism itself.
I've been doing a bit of reverse-engineering work on the LiveView as well, and I think I have a complete (although I fear possibly slightly corrupt) firmware dump!
I have been able to extract some PNG images from the firmware (thanks to their rather distinctive %PNG header and their ending with IEND).
It would appear that the menus and stuff are in fact definitively transferred over bluetooth!
I've attached the images I've extracted if anyone's interested in seeing them!
I'm currently trying to work through it in IDA to disassemble it, which is a pain in the arse!
Is anyone else also interested in completely rewriting the firmware?
@aj256, nice work! I thought I had a dump as well but mine looked compressed :\ Mind uploading yours somewhere for all to see? (edit: sorry, saw it in the archive)
aj256 said:
It would appear that the menus and stuff are in fact definitively transferred over bluetooth!
That's correct - I almost have that part of the protocol figured out but I'm low on spare time.
aj256 said:
Is anyone else also interested in completely rewriting the firmware?
Well.. I'd be interested in modifying it and isolating the Bluetooth stack but don't really have the time OR the chops to write the whole firmware from datasheets and disassembly.
As for where I'm standing, I know what I need to decompile next (renderShowUi) but it's a couple of thousand lines of smali. There are so many branches, it's easy to get lost. I need to write better tools for decompiling smali first
Just bought a LiveView! I know it may not be the best but I got it cheap and mainly want the Caller ID portion of it. I hope this reverse engineering pays off. Once I get mine I may start poking around and see if I can help out! Thanks for the post OP!
Hi,
do you guys have an IRC channel or anything else? Just got my LiveView and want to help you with this...
I've quickly put together a project website at openliveview (dot) com (apparently I don't have enough posts for an external link!) with some forums as well to help to document peoples progress!
I've done a quick writeup on my progress so far (which isn't very much!)
@archivator, glad you found the firmware in the zip, I was just about to reply that it was there!
aj256 said:
I've quickly put together a project website at openliveview (dot) com (apparently I don't have enough posts for an external link!) with some forums as well to help to document peoples progress!
I've done a quick writeup on my progress so far (which isn't very much!)
@archivator, glad you found the firmware in the zip, I was just about to reply that it was there!
Nice. I've been on your website and the documentation is getting into good shape. When I get some free time I'll try to read it more carefully and complement the Android project.
Talking about that, I've uploaded my progress so far to github: https://github.com/pedronveloso/OpenLiveView
Bear in mind that apart from pairing with the device not much is actually working yet; contributions are welcome, of course.

Developing Auto-DisableRefresh-When-Dragging App [New Video Added]

I know some people have already discussed this in the noRefresh App thread.
But I got a new idea
I don't want to create a module and embed it in applications.
Instead, I created a new input device in the kernel.
After that, I set the device ( /dev/input/event3 ) permissions to allow read/write for others.
Then I wrote an app using JNI to catch the input device and set the refresh mode.
The prototype proved that this idea works, as I've already worked it out, but it's not perfect yet and still experimental.
I'm just having some threading problems, as I am not a professional programmer.
This is what I have now:
http://www.youtube.com/watch?v=GkFyvRR6In8
NEW!! : http://www.youtube.com/watch?v=cucG03rg3tg
I know that my method is a bit dirty,
but at least it works XD
I hope that someone would like to help.
I'll upload the code soon.
By the way, why can't I change the content of init.rc?
My changes are removed after a reboot..... How do I solve this?
wheilitjohnny said:
By the way, why can't I change the content of init.rc?
My changes are removed after a reboot..... How do I solve this?
You need to modify uRamdisk to change init.rc.
Could you tell me how to unpack uRamdisk? I use both Windows and Ubuntu; any method on either platform is OK. Thanks!
Try the script from this post http://forum.xda-developers.com/showpost.php?p=24135886&postcount=72
This is simply amazing. Since you already have it working, polishing it shouldn't take too long (I think, I'm still a beginner programmer).
I don't think /dev/input is a good place to park that.
There are any number of things that could show up there and affect order.
Why not put it in its own directory?
Maybe I'm missing something, but if we use your method you still expect applications to have to be modified to read/write from event3 to control display mode?
As it is now, you only need a single function call to switch display modes. Yes, there is a little bit of housework to do before that, but what could be simpler than a single function?
I think no place is more suitable than /dev/input, as /dev/input is the Linux kernel's input system.
In the driver, I only use the input report system and not the I/O system.
Maybe I'll even set up an input event searching function to solve the problem later.
The situation now is that the touch screen originally only provides event2, but if we need to extract move and finger-up information at the upper level, it may waste some time.
In order to make the whole algorithm easier and faster, I added one more event in the zForce driver, to output only the FingerMove and FingerUp states.
As the reporting chain starts from the kernel, this hack needs changes to uImage + uRamdisk + an added app. The project is quite huge.
My final target is to make the application remember your choice of what mode you need for the focused application:
AlwaysOn / OnlyWhenDragging / AlwaysOff, and so on.
Of course the click-to-call function still works.
Isn't that a better workflow and more intuitive? Isn't that what we would expect?
I already have a /dev/input/event3 on my system, and sometimes event4, 5, 6.
That's why I don't think that whatever it is you are doing belongs there.
Is the point to allow coders for user applications to interface with your driver?
Is this supposed to work without modifying user applications?
What information would be going in/out of event3?
Clearly I am missing something here.
Actually it doesn't have to be event3; the system will automatically create a new eventX. In the normal case, without any extra USB devices, a Nook should only have up to event2. So I use event3 as an example here.
We can later make the program automatically search for which eventX is the needed one.
Yes, you're right, we don't need to modify any other applications, and this is exactly why I am creating this!
Does anyone know how to trigger a full-screen refresh from a service?
Always on?
Looks really great. Since blinking after scrolling is uncomfortable, is it possible to also have an "always on" mode using your new ideas?
My final target is to make the application remember your choice of what mode you need for the focused application:
AlwaysOn / OnlyWhenDragging / AlwaysOff, and so on.
Of course the click-to-call function still works.
Also an override mode, for more flexible use.
But the most basic part needs to be handled well first, as it is not perfect yet.
I still can't figure out how to call the e-ink driver to refresh the screen =-=
wheilitjohnny said:
I still can't figure out how to call the e-ink driver to refresh the screen =-=
Have you tried
Code:
echo 1 > /sys/class/graphics/fb0/epd_refresh
?
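That sysfs write is easy to wrap from an app or service through a root shell, in the same spirit as the other root calls in this thread; a minimal sketch using the path given above (root is assumed, and there is no error handling):
Code:
import java.io.DataOutputStream;
import java.io.IOException;

public class EpdRefresh {
    // Ask the e-ink driver for a full refresh by writing to the sysfs node mentioned above
    public static void fullRefresh() throws IOException, InterruptedException {
        Process su = Runtime.getRuntime().exec("su");
        DataOutputStream shell = new DataOutputStream(su.getOutputStream());
        shell.writeBytes("echo 1 > /sys/class/graphics/fb0/epd_refresh\n");
        shell.writeBytes("exit\n");
        shell.flush();
        su.waitFor();
    }
}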
wheilitjohnny said:
Yes, you're right, we don't need to modify any other applications, and this is exactly why I am creating this!
Are you saying that your idea does not require modifying user applications?
If it doesn't, then there is no need to have a public interface.
It will be only your code talking to your code.
What is the point of this /dev/input/event3? You say that it will be writable. What's going in and out?
Some apps will be using gestures, some dragging. How are you going to keep track of that all?
I have one application that works perfectly fine now, one activity uses swipe gestures to page up/down while another activity uses drag with a user choice of A2 and display while dragging or else only panning at ACTION_UP.
All this requires less than 10 lines of code.
With multitouch, many applications don't even need A2. Even normal panning in Opera Mobile works much better now that Opera doesn't try to display while panning.
Maybe my English is too bad and I cannot express the idea well.
I know we can make such an application with noRefreshDrag working well on its own.
But what about other applications? It is impossible for us to change all applications.
So my idea is to make it system-based.
My prototype is very good now, after several adjustments.
It's not limited to only one application, but covers the whole system.
The approach is like this:
1. The zForce driver provides extra information to an input event
2. A JNI layer catches the input event
3. A service gets the data and sets the update mode
We only need to write one application to handle the settings for this chain.
This is what I mean; I hope you get it now.
mali100 said:
Have you tried
Code:
echo 1 > /sys/class/graphics/fb0/epd_refresh
?
Let me try it now!
Wow, it works great!
wheilitjohnny said:
The zForce driver provides extra information to an input event
I guess that this is the part that I don't understand.
What is this extra information?
Renate NST said:
I guess that this is the part that I don't understand.
What is this extra information?
It creates an extra event; I called it event3 before.
The driver reports only move and finger_up to event3.
It just provides a channel to pass information from the driver to user-space.
You may ask why not use the existing event directly instead of creating another one.
That is because the original one only has touch and position information, and parsing that back into move information takes a bit of work. Since the hardware already provides the move information, why waste it?
Code:
public boolean dispatchTouchEvent(MotionEvent event)
{
    switch(event.getAction())
    {
        case MotionEvent.ACTION_DOWN:
            break;
        case MotionEvent.ACTION_MOVE:
            break;
        case MotionEvent.ACTION_UP:
            break;
    }
    return(true);
}
Isn't that everything that you could ask for?
