TCP_NODELAY - send bytes without delay - Networking

I have the following problem developing an app for Android (Android Studio, 4.0.3 kernel). I communicate with my device over Wi-Fi and need to send a few bytes to it every 20 ms, but I cannot send data more often than every 100 ms. When I use a timer with a 20 ms period, the system batches the data and sends it all together every 100 ms. Setting the timer to 200 ms works (data is sent every 200 ms), but it breaks down when I decrease the period to 20 ms.
This is part of the code:
// initialization of the socket:
SocketAddress sockaddr = new InetSocketAddress(ip, parseInt(prt));
nsocket = new Socket();
//nsocket.setPerformancePreferences(0, 1, 2);
nsocket.setSendBufferSize(13);
nsocket.setTcpNoDelay(true); // request TCP_NODELAY, i.e. disable Nagle's algorithm
nsocket.connect(sockaddr);   // connect to the device
// this runs from a timer, every 20 ms:
nos = nsocket.getOutputStream();
nos.write(cmd);
nos.flush();
But Android sends the data only every 100 ms, batching together all the bytes from several timer periods!
I think this is caused by the Nagle algorithm. Is it possible to switch it off? I found something in the documentation about TCP_NODELAY; how can I use it? Is there any way to send data to a socket from Android immediately?
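In case it helps anyone else with the same question: setTcpNoDelay(true) is the standard Java way to request the TCP_NODELAY socket option. Here is a minimal, self-contained sketch of the whole pattern; the address, port and 13-byte command are placeholders (not values from the post above), and a ScheduledExecutorService stands in for the timer:

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicSender {
    // Placeholder values: substitute your device's address and command bytes.
    static final String DEVICE_IP = "192.168.0.10";
    static final int DEVICE_PORT = 5000;
    static final byte[] CMD = new byte[13];

    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(DEVICE_IP, DEVICE_PORT));
        socket.setTcpNoDelay(true); // set the TCP_NODELAY option, disabling Nagle's algorithm
        OutputStream out = socket.getOutputStream();

        // Send CMD every 20 ms on a single background thread.
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                out.write(CMD);
                out.flush();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 20, TimeUnit.MILLISECONDS);
    }
}

Note that on Android the socket work must stay off the main thread, and even with Nagle disabled, Wi-Fi power saving on either end can still delay or batch small packets.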

I believe if you use SQLite and go into the database, you can add TCP_NODELAY with a value of 1.

Related

MortScript script trying to fix a notification queue bug.

Hi,
I'm trying to write a script to fight problems with the notification queue. The script is supposed to run MemMaid (which clears duplicate and messed-up reminders in the queue), sleep for a given period of time, and repeat this procedure (several times a day). While the script is in the sleep() function the device powers off. And here is my problem: sometimes the script wakes up the PDA, but not always. Another strange thing I observed is that if I run the script and wait for the device to power off, I can then switch the device off manually and the script works OK.
Here is the source. I don't know much about the PDA's suspend mode, so maybe it should be written another way or maybe it can't be done at all. Thank you in advance.
Script:
t=2
// minutes of sleeping only 2 - for testing only
t=60000*t
toggleDisplay(1)
message("Start")
repeat (300)
toggleDisplay(1)
runwait("\windows\MemMaidLauncher.exe","-Q")
sleep(t)
endrepeat
What version of MortScript are you using? I'm using v4.01b12 (I think it's up to beta 22 or something). Anyway, in my documentation the values for ToggleDisplay are (off) and (on), and using ToggleDisplay(on) has never failed for me. Another thing you might try, instead of Sleep, is to reschedule your script to run using RunAt.
This is the script I use to run Weather Watcher every day at 7am:
# Schedule WWupdate to run daily at 7am
# initialize variables
CurrentHour = 0
CurrentMinute = 0
CurrentSecond = 0
CurrentDay = 0
CurrentMonth = 0
CurrentYear = 0
target1 = (SystemPath("ScriptPath") & "\WWupdate.mscr")
target2 = (SystemPath("ScriptPath") & "\SchedWWupdate.mscr")
#Figure out what year, month, and day it is
GetTime ( CurrentHour, CurrentMinute, CurrentSecond, CurrentDay, CurrentMonth, CurrentYear )
If (CurrentHour < 7)
RunAt( CurrentYear, CurrentMonth, CurrentDay, 7, 0, target1)
Rerun = Timestamp()+86401
RunAt( Rerun, target2 )
EndIf
If (CurrentHour > 6)
RunAt( CurrentYear, CurrentMonth, CurrentDay+1, 3, 0, target2 )
EndIf
SleepMessage( 10, "WWupdate set to autorun at 7:00am", "Complete...", 1)
ToggleDisplay(OFF)
Good Luck!
Thank you for the answer, but I think the RunAt function puts the task into the notification queue, so if the queue is soiled with buggy reminders our task will never run. Please correct me if I'm wrong.
I don't know anything about buggy reminders, but the only way to see if it works is to try it. When someone tells me something doesn't work, I try it anyway, very often I find that the naysayers are wrong. Nothing risked, nothing gained! There are freeware tools to clean the notification queue. I kept having these notifications about 'replog' and 'calnot' and ran one of these tools that fixed the problem. I've never seen or heard of a situation where one notification interfered with another.
joemanb said:
I don't know anything about buggy reminders, but the only way to see if it works is to try it. When someone tells me something doesn't work, I try it anyway, very often I find that the naysayers are wrong. Nothing risked, nothing gained!
I've tried RunAt and I'm sure it puts the task into the NQ.
joemanb said:
There are freeware tools to clean the notification queue. I kept having these notifications about 'replog' and 'calnot' and ran one of these tools that fixed the problem. I've never seen or heard of a situation where one notification interfered with another.
So you are lucky... I had several cases when the NQ had bad entries and none of the reminders or scheduled tasks worked, so I tried to write a script that doesn't use the NQ for scheduling tasks. I tried many freeware and shareware tools and some of them did a good job, but the problem is how to run them when NQ scheduling doesn't work.
Thanks anyway.

[Q] HTC EVO EPST settings HELP!!!

I am wondering if anyone could explain what the menu options mean. I know a few of them, but I am curious as to what the others mean. I am in a very rural area and only have the options of 3G and satellite, and I am trying to open up my phone with these or some other settings/rooting to get a better ping. I can get a 250 ping at 2 AM, but at around 8 PM I'll have an 800-1500 ping, which is no good.
So if anyone could either help me out or point me in the right direction (e.g. post a link to a forum that I may have overlooked), flat out tell me what I have to do, or let the bits and pieces of the puzzle come in one at a time, any information would help.
My signal is around -92 to -95. My reading and research say that -60 is a damn near perfect signal and -112 is losing calls.
EPST
-Data Profile
*active profile
-1
*SPI MN-HA
-4D2
*SPI-AAA
-4D2
*Reverse Tunnel Preferred
-Enabled
*Home IP address
-0.0.0.0
*primary Home Agent
-255.255.255.255
*secondary home agent
-(default 68.28.157.76) I changed it to 68.28.145.76 (the Chicago HA, which is closer to my location than the default in Roachdale, IN)
*HA Shared secret
-******
*AAA Shared Secret
-******
-EVDO
*DDTM (this disables incoming calls for something, I don't know what exactly)
-disabled
*Preferred Mode
-automatic
#modify: automatic, HDR Only, Digital Only, CDMA Only, CDMA HDR Only
(I know that CDMA is 2G)
-Advanced
*MEID(Hex)
-a long code... don't want to give out personal info
*p_rev
-IS-2000 (6)
*SCM
-58
*Slot Mode
-enabled
*Slot Cycle Index
-2 (I know this is the default, and that it's the rate at which the phone checks to see if you are receiving a call; I think 2 = 3.24 s or thereabouts)
*MCC(mobile country code)
-310
*MNC(mobile network code)
-00
*msid
-looks to be a 10-digit phone number with my area code
*ACCOLC(Access Overload Class)
-5
*vocoder
-enabled
*EVRC-B
-disabled
*roam orig
-voice 13k
*LBS PDE IP
-<not available>
*LBS PDE Port
-<Not available>
*Home SID/NID #1
-4139/65535
-WiMAX
*standby time (minute)
-10
*scan rate (Minute)
-30
*center frequency
-2647000,2657000,2667000
*bandwidth
-10,10,10
*NAPDIS
-000002 #modify: 000004
*NSPDIS
-000004 #modify: 000002
*WiMAX_scan_attempt_timeout(s)
-1
*WiMAX_scan_Retry(s)
-120
*Wimax_Idle_sleep(s)
-10
*WiMAX_Entry_RX(RSSI)(dBm)
- -60
*WiMAX_Entry_CINR(dB)
-4
*WiMAX_Entry_Delay(s)
-300
*WiMAX_Exit_CINR(dB)
-2
*WiMAX_Exit_Delay(s)
-2
*Handoff Threshold
-0
*Idle Mode Timer
-0
Which settings do you need in epst?
Sorry that was a mistake.
I need the shared secrets.
HA and AAA for profile 1 and profile 0. After sending the SPC in QXDM and running "06:18:32.773 requestnvitemread ds_mip_ss_user_prof 0" I get all zeros in return for both secrets and both profiles. Ugh, can't get my data working.

Do build.prop tweaks actually work? - A guide

While searching deeper into build.prop tweaks, I came across this article, originally posted on Jeff Mixon's blog. At the moment the whole site is inaccessible, so it is probably a good idea to have it in full here (thanks to Google cache).
While it's focused on ICS, some of the points made are applicable to Gingerbread too.
Examining build.prop tweaks for Android ICS: A comprehensive guide
(from: http://www.jeffmixon.com/examining-build-prop-tweaks-for-android-ics-a-comprehensive-guide-part-1/)
Android devices are great. They can easily be hacked and tweaked to seemingly no end. There’s a lot of great info out there detailing step-by-step instructions on how to perform various enhancements to your particular device. Unfortunately, there is also a fair amount of misinformation being disseminated via forums and blogs that can have a very negative effect on your device. Just because you can change something doesn’t necessarily mean you should. In this guide, I will walk through various common Android build.prop settings and evaluate whether or not you should actually change them on your Android ICS device, MythBusters style.
windowsmgr.max_events_per_sec – BUSTED
As the name implies, this property specifies how quickly the system is allowed to process certain events before throttling occurs. Specifically, this property is used by the InputDispatcher when processing screen touch and movement events. This value will really only come into play with extremely rapid touch events, such as swiping or scrolling. The default value for this property is 90, and Google explains why:
Code:
// This number equates to the refresh rate * 1.5. The rate should be at least
// equal to the screen refresh rate. We increase the rate by 50% to compensate for
// the discontinuity between the actual rate that events come in at (they do
// not necessarily come in constantly and are not handled synchronously).
// Ideally, we would use Display.getRefreshRate(), but as this does not necessarily
// return a sensible result, we use '60' as our default assumed refresh rate.
result = 90;
Many build.prop tweaks set this value to 300, but that seems to be a bad idea. As Google points out, Android maxes out at 60 fps. The default value already allows for a possible max_events_per_sec of 90. Even if you allow for 300 max_events_per_sec, you'll only ever see 60 of these events in any given second. Therefore, any value much higher than 90 is unlikely to have any noticeable impact on your experience in general. Additionally, setting this value too high can starve other UI events that need to get processed, viz. touch inputs. You're not likely to feel like your device is running very smoothly when it is busy processing thousands of scroll events instead of responding immediately to you clicking to try and open a link or an app. There may be some specific scenarios where increasing this value does appear to improve system feedback, but changing this value for all UI events across the board will likely cause more problems than it solves.
dalvik.vm.heapgrowthlimit and dalvik.vm.heapsize - BUSTED
Android devices are getting more powerful every day, and along with that, the amount of RAM devices have available has increased significantly. Devices with 1GB of RAM are now common; some are even equipped with 2GB already.
This is one property that has cropped up recently in various build.prop recommendations for ICS. Typical suggested values range from “48m” all the way up to “256m”, likely motivated by the common misconception that more is better. The real purpose of this property is much less obvious than one might initially guess. It is also another one you should probably avoid changing.
In ICS, the dalvik.vm.heapgrowthlimit property takes over as the effective dalvik.vm.heapsize property. Applications will be restricted to the value set by this property by default. Google has this to say about it:
Code:
// The largest size we permit the heap to grow. This value allows
// the user to limit the heap growth below the maximum size. This
// is a work around until we can dynamically set the maximum size.
// This value can range between the starting size and the maximum
// size but should never be set below the current footprint of the
// heap.
And indeed, we can see this enforced in several places in the Heap and HeapSource structures, including:
Code:
if (overhead + HEAP_MIN_FREE >= hs->maximumSize) {
    LOGE_HEAP("No room to create any more heaps "
              "(%zd overhead, %zd max)",
              overhead, hs->maximumSize);
    return false;
}
heap.maximumSize = hs->growthLimit - overhead;
As we see here, the heap's maximum size is determined by the growthLimit variable on the HeapSource structure. We can also see a check to ensure there is enough total heap space available to begin with before attempting to create the new heap. At first blush, it looks like growthLimit (dalvik.vm.heapgrowthlimit) is redundant and synonymous with maximumSize (dalvik.vm.heapsize). It seems that you could set dalvik.vm.heapgrowthlimit to 64M and dalvik.vm.heapsize to 256M and only the growthLimit value would be used to govern a heap's maximum size. That's where this interesting method comes in:
Code:
/*
 * Removes any growth limits. Allows the user to allocate up to the
 * maximum heap size.
 */
void dvmClearGrowthLimit()
{
    ...
    gHs->growthLimit = gHs->maximumSize;
    size_t overhead = oldHeapOverhead(gHs, false);
    gHs->heaps[0].maximumSize = gHs->maximumSize - overhead;
    gHs->heaps[0].limit = gHs->heaps[0].base + gHs->heaps[0].maximumSize;
    ...
}
This method effectively removes any limitation set by dalvik.vm.heapgrowthlimit and sets the maximum heap size to the value defined by dalvik.vm.heapsize (or the hard-coded default of 16M). This method is wired up straight to the Dalvik runtime implementation as a native call and we can see it defined here:
Code:
static void Dalvik_dalvik_system_VMRuntime_clearGrowthLimit(const u4* args, JValue* pResult)
{
    dvmClearGrowthLimit();
    RETURN_VOID();
}
...
const DalvikNativeMethod dvm_dalvik_system_VMRuntime[] = {
    ...
    { "clearGrowthLimit", "()V",
      Dalvik_dalvik_system_VMRuntime_clearGrowthLimit },
    ...
};
And if we keep chasing this rabbit “up” the rabbit hole, we can see it finally in action here in the ActivityThread Java class:
Code:
if ((data.appInfo.flags&ApplicationInfo.FLAG_LARGE_HEAP) != 0) {
dalvik.system.VMRuntime.getRuntime().clearGrowthLimit();
}
This flag is set by a relatively new manifest attribute, android:largeHeap (API level 11), which you can add to the application element in an Android application's manifest file.
So what does this all mean? The dalvik.vm.heapgrowthlimit property limits how large an Android application’s heap can get before garbage collection has to be attempted. The dalvik.vm.heapsize property defines an absolute maximum for the heap size for an application even when the largeHeap flag is set in the manifest. Google’s motivation behind doing this was clearly to limit the heap size to a reasonable amount for most applications, but also give some flexibility to app developers who know they’re going to need the largest heap size possible to run their application.
Should you change this setting? Probably not. The ICS default for a phone with (at least) 1024MB of RAM is 64m. You can check your specific phone's value, as the hardware vendor can override this when they build the ROM. But don't let the disparity between 1024 and 64 bother you; most mobile apps should not have any problems with a 64MB heap unless the developers are naughty. When this limit is reached, a garbage collection routine will remove obsolete objects from memory, reducing the heap size considerably in most cases. It is extremely unlikely that raising this value to reduce GC routines will have any perceptible effect. If anything, it could cause other apps or the general system to suffer from too many stale objects lingering in memory. Garbage collection will inevitably occur either way, and when it does, the size of the heap will likely have a direct impact on the cost of the routine.
The point is, it is impossible for a user to optimize for every application using this system-wide setting. This responsibility falls on application developers to optimize their applications, not users. The largeHeap flag was created to allow developers to do just that. If you do feel compelled to experiment with this setting regardless, be mindful that an application could have up to two heaps at once. Thus, the heap growth limit value should always be, at most, a little less than half of the maximum allowable heap size.
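As a quick aside (my addition, not part of the original article), here is a minimal sketch of how an app can inspect these limits at runtime; ActivityManager.getMemoryClass() roughly reflects dalvik.vm.heapgrowthlimit and getLargeMemoryClass() roughly reflects dalvik.vm.heapsize:
Code:
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

// Minimal sketch: log the heap limits the platform reports for this app.
public final class HeapLimits {
    // `context` can be any valid Context, e.g. an Activity.
    public static void log(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        // Per-app heap limit in MB (roughly dalvik.vm.heapgrowthlimit).
        Log.d("HeapLimits", "memoryClass = " + am.getMemoryClass() + " MB");
        // Limit in MB when android:largeHeap="true" is set (roughly dalvik.vm.heapsize).
        Log.d("HeapLimits", "largeMemoryClass = " + am.getLargeMemoryClass() + " MB");
        // What the running VM will actually let this process grow to, in bytes.
        Log.d("HeapLimits", "Runtime.maxMemory = " + Runtime.getRuntime().maxMemory());
    }
}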
debug.performance.tuning – BUSTED
This property doesn’t appear to exist in the ICS code base. Incidentally, I also don’t see it in Gingerbread (2.3.6). It’s possible this value is specific to only certain custom implementations of Android, such as Cyanogenmod, but as far as I can tell, this one does nothing.
video.accelerate.hw – BUSTED
Again, this one appears to do nothing in ICS.
persist.adb.notify – CONFIRMED
This one disables the USB debugging notification shown when you have your device connected to a computer. We can see it used here:
Code:
private void updateAdbNotification() {
    if (mNotificationManager == null) return;
    final int id = com.android.internal.R.string.adb_active_notification_title;
    if (mAdbEnabled && mConnected) {
        if ("0".equals(SystemProperties.get("persist.adb.notify"))) return;
This is a good one to disable (set to "0") if you want to declutter your notification bar.
persist.sys.purgeable_assets – BUSTED
This one claims to free up memory; however, it is nowhere to be found in the Android code base. It is a patch made to Cyanogenmod 7 to do some Bitmap hackery. It may not even exist in CM9. Unless you are running CM7, or possibly CM9, this property has no effect.
persist.sys.use_dithering – BUSTED
Another Cyanogenmod-specific property. This will have no effect on stock ICS.
dalvik.vm.execution-mode – BUSTED
This property can set the execution mode of the Dalvik VM. The VM can run in three modes: fast, portable, and JIT. It is possible to compile Android without JIT support, but the default is to include it. In general, JIT is the execution mode you are going to want on your device. This is why you will see most build.prop files setting this property to "int:jit". However, this is unnecessary, since the default execution mode is JIT:
Code:
#if defined(WITH_JIT)
gDvm.executionMode = kExecutionModeJit;
#else
...
As I mentioned before, the WITH_JIT compiler flag is set by default in ICS, so there is no need to define this setting in your build.prop. If the WITH_JIT flag were not set, setting the execution mode to JIT would have no effect anyway.
dalvik.vm.dexopt-flags – PLAUSIBLE
This property can set various options that affect how the Dalvik runtime performs optimization and verification. Suggested values range from turning bytecode verification off completely to enabling the object register map and precise garbage collection. Setting "v=n" will turn off bytecode verification, which, while unlikely in practice to cause any problems directly, is a severe violation of the whole Java trust and security model.
On the other hand, setting “m=y” will turn on the register map for tracking objects to garbage collect. Incidentally, this also enables “precise” garbage collection, which, as you may have guessed is a slightly more accurate way to track objects for garbage collection. By accurate we mean less likely to falsely identify an object as still in use when in fact it is no longer being used. This would, in theory, be a more efficient mechanism to free up unused objects, and thus increase available memory (RAM).
Enabling precise GC seems like a good idea as long as your device has the muscle to spare, but I cannot find enough information to know for sure what the total implication of using precise GC versus conservative GC is. I suspect it is likely a trade-off of more CPU cycles per collection in exchange for more free RAM. You will have to decide which one is more important to you based on your device's capabilities and personal tolerance levels. This is assuming that the difference will even be perceptible in a real-world environment, which I suspect would be less than profound, if anything at all.
ro.media.dec.jpeg.memcap – BUSTED
This property is one of many that promise to make your audio and visual experience better. Unfortunately, not only is it related solely to JPEG decompression, it is completely unused in ICS (and Gingerbread, for that matter) and has no effect on your device.
At first, it looks kind of promising:
Code:
// Key to lookup the size of memory buffer set in system property
static const char KEY_MEM_CAP[] = "ro.media.dec.jpeg.memcap";
OK, cool. The property exists, at least. However, that's the only place it is referenced; KEY_MEM_CAP is never used anywhere. If we look a little closer, we'll come across this:
Code:
/* Check if the memory cap property is set.
   If so, use the memory size for jpeg decode.
*/
static void overwrite_mem_buffer_size(j_decompress_ptr cinfo) {
#ifdef ANDROID_LARGE_MEMORY_DEVICE
    cinfo->mem->max_memory_to_use = 30 * 1024 * 1024;
#else
    cinfo->mem->max_memory_to_use = 5 * 1024 * 1024;
#endif
}
This looks like exactly where this property should be used, but isn’t. The value is clearly hardcoded here. In fact, if we look up the Skia project source tree on Google code, we can see that the latest version has this variable commented out now with the following comment:
Code:
/* For non-ndk builds we could look at the system's jpeg memory cap and use it
* if it is set. However, for now we will use the NDK compliant hardcoded values
*/
//#include <cutils/properties.h>
//static const char KEY_MEM_CAP[] = "ro.media.dec.jpeg.memcap";
ro.media.enc.hprof.vid.bps – CONFIRMED
This one is generally grouped in with other "media" enhancement tweaks. This particular property controls the "high" bit rate at which videos are encoded using the Android stock camera application. That means if you are using a third-party camera app, this setting will have no effect. Let's take a look at the snippet.
Code:
mVideoBitrate = getInt("ro.media.enc.hprof.vid.bps",
"ro.media.enc.lprof.vid.bps",
360000, 192000);
While the default values may seem very low, these defaults are very unlikely to ever be used. When a device manufacturer deploys the production ROM, they will define many properties in a different property file (viz. "system.prop"), which will contain values specific to the hardware. Those values are going to be used instead of the hard-coded ones we see here.
For example, if I launch a shell on a Galaxy S3 and run the following command:
Code:
getprop ro.media.enc.hprof.vid.bps
I will get back a value of "12000000", even though I do not have this property defined in my build.prop. This is because it was defined in the default.prop file by Samsung, who know the capabilities of the device and its camera.
This setting can definitely be useful if these values are important to you; just be sure you're not setting the value to the same as (or, worse yet, lower than) what is already defined on your device. While you're at it, you may want to tweak some of the other values:
Code:
ro.media.enc.hprof.aud.bps
ro.media.enc.hprof.codec.vid
ro.media.enc.hprof.codec.aud
ro.media.enc.hprof.aud.hz
The names are pretty self-explanatory, but you can find out more info about each one with a little bit of Googling.
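For completeness (my addition, not the article's), here is a small sketch of how you could read the current values of these properties from code by shelling out to the same getprop tool used above; the property names come from the list above and the values will vary per device and ROM:
Code:
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch: print the current value of each media encoder property.
public class PropReader {
    static String getProp(String key) throws Exception {
        // Runs the standard Android getprop tool for a single key.
        Process p = new ProcessBuilder("getprop", key).redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line = r.readLine();
            return line == null ? "" : line.trim();
        }
    }

    public static void main(String[] args) throws Exception {
        String[] keys = {
            "ro.media.enc.hprof.vid.bps",
            "ro.media.enc.hprof.aud.bps",
            "ro.media.enc.hprof.codec.vid",
            "ro.media.enc.hprof.codec.aud",
            "ro.media.enc.hprof.aud.hz",
        };
        for (String key : keys) {
            System.out.println(key + " = " + getProp(key));
        }
    }
}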
Summary
The build.prop file is a powerful tool for root users to modify the behavior of various aspects of the Android experience. However, there are no secret values here that are going to instantly make your phone run faster, better, smarter, or more efficiently. In fact, if you don’t know what you are doing, you can actually decrease the performance of your device. Over time I will investigate more build.prop properties to determine which ones can actually enhance your Android experience and which ones only create a placebo effect.
Nice, but...
We all know the .prop is powerful, but that certainly doesn't mean you can't modify strings to get your ROM/phone to run faster; in that case you're "busted" (meaning by the person who posted this article, not you, XDA guy xD).
If you are running ICS or GB, try this out. Add these to the bottom of your build.prop, after the last line:
# dalvik props
dalvik.vm.heapstartsize=5m
dalvik.vm.heapgrowthlimit=48m
dalvik.vm.heapsize=128m
windowsmgr.max_events_per_sec=240
debug.enabletr=true
ro.max.fling_velocity=12000
ro.min.fling_velocity=8000
ro.ril.disable.power.collapse=1
ro.telephony.call_ring.delay=0
persist.adb.notify=0
dalvik.vm.dexopt-flags=m=y,o=v,u=y
#Render UI with GPU
debug.sf.hw=1
#debug.composition.type=gpu
debug.composition.type=c2d
debug.performance.tuning=1
debug.enabletr=true
debug.qctwa.preservebuf=1
dev.pm.dyn_samplingrate=1
video.accelerate.hw=1
ro.vold.umsdirtyratio=20
debug.overlayui.enable=1
debug.egl.hw=1
ro.fb.mode=1
hw3d.force=1
persist.sys.composition.type=c2d
persist.sys.ui.hw=1
ro.sf.compbypass.enable=0
# persist.sys.shutdown.mode=hibernate
ro.config.hw_quickpoweron=true
ro.lge.proximity.delay=25
mot.proximity.delay=25
ro.mot.buttonlight.timeout=0
If you are running GB, remove the following lines from the list above:
#Render UI with GPU
debug.sf.hw=1
Now watch the magic
krishneelg3 said:
We all know the .prop is powerful, but that certainly doesn't mean you can't modify strings to get your ROM/phone to run faster...
Increasing the heap to 128 will make your battery drain like hell.
Sent from my E15i using xda premium
It works 200% if you do it correctly.
I increased my GPRS class from 10 to 12 and it works.
Tested on mini CM10.
If I helped, press thanks :thumbup:
ElmirBuljubasic said:
Increasing the heap to 128 will make your battery drain like hell.
Sent from my E15i using xda premium
Elmir, what's a good setting? 120? 110?
Sent from my Ascend G300 using Tapatalk 2
ro.HOME_APP_ADJ=0: this line sends the settings app to the cache and frees some RAM, and also stops the Xperia launcher from restarting when you leave a heavy application.
This line works on GB.
Sorry for my bad English.
Nice copy and paste
Great :thumbup:
Sent from my E15i using xda app-developers app

huge amount of AudioFlinger messages in logcat

I'm a new user, not allowed to post in the development forum, so forgive me for posting here. This is in response to the "[ROM][AOSP][hermes] Delirium Sense7" thread. This ROM produces 80 KB of logcat output every second, all repetitions of the same messages:
Code:
D/AudioALSACaptureDataProviderNormal( 263): readThread, latency_in_us,0.000119,0.000299,0.001681
D/AudioFlinger_Threads( 263): need 54274 frames in to produce 18091 out given in/out ratio of 3
D/AudioFlinger_Threads( 263): not enough to resample: have 1025 frames in but need 54274 in to produce 18091 out given in/out ratio of 3
D/AudioFlinger_Threads( 263): now need 1024 frames in to produce 341 out given out/in ratio of 0.3333
D/AudioFlinger_Threads( 263): success 2: have 1025 frames in and need 1024 in to produce 341 out given in/out ratio of 3
V/AudioTrackShared( 263): mAvailToClient=0 stepCount=341 minimum=342
D/AudioFlinger_Threads( 263): need 53251 frames in to produce 17750 out given in/out ratio of 3
D/AudioFlinger_Threads( 263): not enough to resample: have 2 frames in but need 53251 in to produce 17750 out given in/out ratio of 3
The messages hint at a problem in audio resampling, but since the phone produces these messages without any audio playing, either something is misbehaving or a secret recorder has been running. Anyway, is this an ignorable problem? Perhaps the ROM set the debug level too high (I'm new to Android dev). I currently filter out these messages so I can work on my other projects.
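For anyone who also wants to hide these lines while debugging, one option (a minimal sketch, assuming it runs on the device or inside an adb shell) is to use logcat's tag:priority filter specs, where priority S silences a tag; the tag names below are taken from the log excerpt above:
Code:
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch: run logcat with the noisy audio tags silenced, print the rest.
public class QuietLogcat {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder(
                "logcat",
                "AudioALSACaptureDataProviderNormal:S", // S = silent, drop this tag
                "AudioFlinger_Threads:S",
                "AudioTrackShared:S",
                "*:V")                                  // keep every other tag at verbose
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}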
Eventually I am switching away because of the following bugs:
- my SIP client (Nonoh) usually produces noise while dialing
- Google Maps freezes, mostly when I am on the move (GPS location changes)
- tethering doesn't work: the other Wi-Fi clients connect and get an IP address but can't surf
Hope the next ROM I try is better.
FWIW, this is due to "Google Now" detection being set to "From any screen". Turn that off, and these will go away (but if it's only when charging and screen on, it might not be that big of a deal).

[MODULE] KTweak - Backed by evidence

Another "kernel optimizer"?
No. Well, yes. However, a "kernel optimizer" is a poor way to put it. KTweak performs kernel adjustments based on facts and evidence, unlike other optimizers with poorly written or heavily obfuscated code. For example:
LSpeed is almost 4000 lines long; completely unnecessary.
NFS Injector uses compiled binaries that are closed source... yuck. Not to mention the typos in the README. This one is hard to look at.
LKT sets random nonsensical build.props that likely don't even exist.
MAGNETAR uses (you guessed it) compiled binaries that install themselves to your /system/etc/ directory (???). Great idea, install an external closed source, compiled binary to the system partition.
Need I go on?
What's different about KTweak?
Unlike other "kernel optimizers", KTweak is:
Concise, at around 200 lines long,
Entirely open source with no compiled components,
Backed by logic and evidence,
Designed by an experienced kernel developer,
Non-intrusive, being completely systemless.
Benchmarks
The following benchmarks were performed on a OnePlus 7 Pro running the stock kernel provided by the OEM on Android 10.
hackbench -pTl 4000 (lower is better)
Without KTweak: ~20-50 seconds on average
With KTweak: ~4-6 seconds on average
perf bench mem memcpy (lower is better) (average of 50 iters)
Without KTweak: 14.01 ms
With KTweak: 10.40 ms
synthmark (voicemark) (higher is better)
Without KTweak: 374.94
With KTweak: 383.556
synthmark (latencymark little) (lower is better)
Without KTweak: 10
With KTweak: 10
synthmark (latencymark big) (lower is better)
Without KTweak: 12
With KTweak: 10
The Tweaks
In order to remain genuine, I have committed to explaining each and every kernel tweak that KTweak applies. Grab your coffee; this could take a while.
kernel.perf_cpu_time_max_percent: 25 --> 5
This is the maximum CPU time, as a percentage, that perf event processing can take. If this percentage is exceeded (meaning perf event processing used too much CPU time), the polling rate is throttled. This is reduced from 25% to 5%. We can afford inaccuracies in perf events in exchange for more time that a foreground task can use.
kernel.sched_autogroup_enabled: 0 --> 1
The Linux kernel scheduler (CFS) distributes timeslices to each active task. For example, if the scheduling period is 10ms and there are 5 tasks running, CFS will give each task 2ms of runtime for that scheduling cycle. However, this means that a SCHED_OTHER task may compete with a SCHED_FIFO task. Autogrouping groups task groups together during scheduling. For example, if the scheduling period is 10ms and there are 6 SCHED_OTHER tasks and 4 SCHED_FIFO tasks running, the SCHED_OTHER tasks will get 50% of the runtime and the SCHED_FIFO tasks will get the other 50%. Within each task group, the timeslices are once again divided: the SCHED_FIFO tasks will get 12.5% runtime each and the SCHED_OTHER tasks will get ~8.3% runtime each. This usually offers better interactivity on multithreaded platforms.
See the scheduling priority documentation: https://man7.org/linux/man-pages/man7/sched.7.html
See autogrouping off: https://www.youtube.com/watch?v=uk70SeGA7pg
See autogrouping on: https://www.youtube.com/watch?v=prxInRdaNfc
kernel.sched_enable_thread_grouping: 0 --> 1
To my knowledge, given the limited documentation for this tunable, this is basically autogrouping for thread groups.
kernel.sched_child_runs_first: 0 --> 1
When forking a child process from the parent, execute the child process before the parent process. This usually shaves down some latency on task initializations, since most of the time the child process is doing some form of heavy lifting.
kernel.sched_downmigrate: 20 20
Do not allow tasks to migrate back down to a lower-power CPU until the estimated CPU utilization would go below 20% on said CPU. This means tasks will stay on higher-performance CPUs for longer than usual.
kernel.sched_upmigrate: 80 80
Similar to the previous tunable: do not allow tasks to migrate up to the higher-performance CPUs unless their utilization goes above 80%.
kernel.sched_group_downmigrate: 20
The same as kernel.sched_downmigrate, except for whole task groups.
kernel.sched_group_upmigrate: 80
The same as kernel.sched_upmigrate, except for whole task groups.
kernel.sched_tunable_scaling: 0
This is more of a precaution than anything. Since the next few tunables will be scheduler timing related, we don't want the scheduler to scale our values for multiple CPUs, as we will be providing CPU-agnostic values.
kernel.sched_latency_ns: 10000000 (10ms)
Set the default scheduling period to 10ms. If this value is set too low, the scheduler will switch contexts too often, spending more time internally than executing the waiting tasks.
kernel.sched_min_granularity_ns: 1000000 (1ms)
Set the minimum task scheduling period to 1ms. With kernel.sched_latency_ns set to 10ms, this means that up to 10 tasks may execute within the 10ms scheduling period before it is exceeded.
kernel.sched_migration_cost_ns: 500000 (0.5ms) --> 1000000 (1ms)
Increase the time that a task is considered to be cache hot. According to RedHat, increasing this tunable reduces the number of task migrations. This should reduce time spent balancing tasks and increase per-task performance. See RedHat: https://www.redhat.com/files/summit...tuning-of-Red-Hat-Enterprise-Linux-Part-1.pdf
kernel.sched_min_task_util_for_boost: 25
This value affects whether a task should be migrated to a higher-performance CPU when its utilization is above this amount. Allow tasks to be migrated upwards if the user is triggering a touch boost and the task is above 25% utilization.
kernel.sched_min_task_util_for_colocation: 50
This value is the same as the former, except it applies when the user is not touching the screen. We shouldn't upmigrate tasks if the user isn't actively interacting with them (e.g. video streaming).
kernel.sched_nr_migrate: 32 --> 64
When migrating tasks between CPUs, allow the scheduler to migrate twice as many as usual. This should increase scheduling latency marginally, but increase the performance of SCHED_OTHER tasks.
kernel.sched_schedstats: 1 --> 0
Disable scheduler statistics accounting. This is just for debugging, but it adds overhead.
kernel.sched_wakeup_granularity_ns: 1000000 (1ms) --> 5000000 (5ms)
Require the current task to be surpassing the new task in vruntime by 5ms instead of 1ms before preemption occurs. This should reduce jitter due to less frequent task interruptions.
kernel.timer_migration: 1 --> 0
Disable the migration of timers among CPUs. Usually, when a timer is created on one CPU, it would be able to be migrated to another CPU. However, this increases realtime latencies and scheduling interrupts. It can be turned off.
net.ipv4.tcp_ecn: 2 --> 1
Enable Explicit Congestion Notification for incoming and outgoing negotiations. This reduces packet losses.
net.ipv4.tcp_fastopen: 3
Enable TCP Fast Open, which allows data transmission during the initial SYN exchange of the TCP handshake. This reduces connection latencies. Enable it for both senders and receivers.
net.ipv4.tcp_syncookies: 1 --> 0
This tunable, when enabled, helps protect against SYN-flood denial-of-service attacks by validating connection ACKs without having to track every half-open connection. However, this is more or less unnecessary for a mobile device; it is more applicable to servers. Disable it.
net.ipv4.tcp_timestamps: 1 --> 0
RedHat claims that TCP timestamps may cause performance spikes due to time accounting code on high-performance connections. Disable it. See RedHat: https://access.redhat.com/documenta...ml/tuning_guide/reduce_tcp_performance_spikes
vm.compact_unevictable_allowed: 1 --> 0
Do not allow compaction of unevictable pages. With this set to 1, more compactions can happen at the cost of small page fault stalls. Turn this off to compact less but avoid aforementioned stalls.
vm.dirty_background_ratio: 5 --> 10
Start writing back dirty pages (pages that have been modified but not yet written to the disk) asynchronously at 10% memory dirtied instead of 5%. Writing dirty pages back too early can be inefficient and overutilize the storage device.
vm.dirty_ratio: 20 --> 30
This tunable is the same as the former, but it is the ceiling for synchronous dirty writeback, meaning all I/O will stall until all dirty pages are written out to the disk. We usually won't need to worry about hitting this value, as the background writeback can usually catch up before we reach the ceiling. But as a precaution (i.e. during heavy file transfers), increase this value to a 30% ceiling to prevent visible system stalls. We are sacrificing available memory in exchange for a reduced chance of a brief system stall.
vm.dirty_expire_centisecs: 300 (3s) --> 1000 (10s)
This is the longest that dirty pages can remain in the system before they are forcefully written out to the disk. By increasing this value, we can allow the dirty background writeback to take its time asynchronously, and avoid unnecessary writebacks that can clog the flusher thread.
vm.dirty_writeback_centisecs: 500 (5s) --> 0 (0s)
Do not periodically writeback data every 5 seconds. Instead, leave it to the dirty background writeback to wake up when the dirty memory of the system hits 10%. This allows the dirty pages to stay in memory for longer, possibly increasing cache locality as the page cache is still available in memory.
vm.extfrag_threshold: 500 --> 750
Compact memory more often, even when an allocation failure would otherwise be attributed to low memory rather than fragmentation. This lets us put more data into RAM at the expense of running compaction more often. This is a worthy tradeoff, as it reduces memory fragmentation, which is incredibly important for ZRAM.
vm.oom_dump_tasks: 1 --> 0
Do not dump debug information when (or if) we run out of memory. If we have a lot of tasks running, and are OOMing often, then this overhead can add up.
vm.page-cluster: 3 --> 0
Disable reading additional pages from the swap device (in most cases, ZRAM). This is the same philosophy as disabling readahead.
vm.reap_mem_on_sigkill: 0 --> 1
When we kill a task, clean its memory footprint to free up whatever amount of RAM it was consuming.
vm.stat_interval: 1 --> 10
Update /proc/stat information every 10 seconds instead of every second, reducing jitter on loaded systems.
vm.swappiness: 100 --> 80
Swap to ZRAM less often if we don't have to. ZRAM can become expensive due to constant compression and decompression. If we can keep some of the memory uncompressed in regular RAM, we can avoid that overhead.
vm.vfs_cache_pressure: 100 --> 200
This tunable controls the kernel's tendency to reclaim inodes and dentries over page cache. Inodes and dentries hold file metadata and directory structures, while the page cache holds the actual cached contents of files. By increasing this value to 200, we tell the kernel to prefer reclaiming inodes and dentries over the page cache, increasing the chance of a cache hit when referencing recently used data while not polluting the RAM with less important information.
Next Buddy
By scheduling the last woken task first, we can increase cache locality since that task is likely to touch the same data as before.
No Strict Skip Buddy
Usually, the scheduler will always choose to skip tasks that call yield(). However, these yielding tasks may be of higher importance than the last or next buddy that is available. Do not always skip the skip buddy if we don't have to.
No Nontask Capacity
The scheduler decrements the perceived CPU capacity the longer the CPU has been idle. This means that an idle CPU may be skipped during task placement, and a task can be grouped onto a busier CPU. Disable this to improve task start latency.
TTWU Queue
Allow the scheduler to place tasks on their origin CPU, increasing cache locality if the CPU is non-local (i.e. a cache hit would definitely have been missed).
Governor Tweaks
hispeed_load: 90 --> 80: Jump to a higher frequency if we are approaching the end of the frequency list, where a task may begin to starve or begin to stutter.
hispeed_freq: Set the "higher freq" (referenced by hispeed_load) to the maximum frequency available to take advantage of race-to-idle.
CAF CPU Boost Tweaks
input_boost_freq: 1.4 GHz (closest freq) as a generic, universal boost frequency to the little cluster.
input_boost_ms: 250 ms, not consuming too much power but boosting for important, interactive events such as clicking on things.
I/O
iostats: 1 --> 0: Disable I/O statistics accounting, which adds overhead.
readahead: 0: Disable readahead, which is intended for disks with long seek times (HDD), whereas mobile devices use flash storage with zero seek time.
nr_requests: 128 --> 512: Allow more I/O requests to be issued before flushing the queue, slightly increasing latencies but allowing more requests to be executed before being put to sleep.
noop / none: Use a scheduler with little CPU overhead to reduce I/O latencies, which is essential for fast flash storage (eMMC & UFS).
ZRAM
ZRAM reduces disk wear by reducing disk writes, and also increases cache locality by allowing more data to fit in RAM at once. KTweak configures ZRAM to take up at most half of the available RAM on the system, which is a good ratio of RAM to ZRAM for a mobile device.
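To make the "half of the available RAM" ratio concrete, here is a minimal sketch (my own illustration, not KTweak's actual code) that reads MemTotal from /proc/meminfo and computes such a target size:

import java.io.BufferedReader;
import java.io.FileReader;

// Minimal sketch: compute a ZRAM disksize of roughly half the physical RAM.
public class ZramSizer {
    public static void main(String[] args) throws Exception {
        long memTotalKb = 0;
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/meminfo"))) {
            String line;
            while ((line = r.readLine()) != null) {
                // The line looks like: "MemTotal:        5890132 kB"
                if (line.startsWith("MemTotal:")) {
                    memTotalKb = Long.parseLong(line.replaceAll("\\D+", ""));
                    break;
                }
            }
        }
        long zramBytes = (memTotalKb * 1024) / 2; // half of physical RAM, in bytes
        System.out.println("Suggested ZRAM disksize: " + zramBytes + " bytes");
        // Actually applying it would mean writing this value to /sys/block/zram0/disksize
        // (after swapoff and a device reset), which requires root.
    }
}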
Other Notes
You should know that KTweak applies after 60 seconds of uptime, so as to prevent Android's init from overwriting any values.
Contact
You can find me on telegram at @tytydraco. Feel free to email me at [email protected].
Downloads
All releases and the entire source code for KTweak are available on GitHub:
Downloads
XDA:DevDB Information
KTweak, Tool/Utility for all devices (see above for details)
Contributors
tytydraco
Source Code: https://github.com/tytydraco/ktweak
Version Information
Status: Stable
Current Stable Version: v1.0.7
Stable Release Date: 2020-08-16
Created 2020-08-16
Last Updated 2020-08-16
What are the requirements to use this? Root with Magisk is a given - but Linux kernel version, Android OS version, device, etc?
MishaalRahman said:
What are the requirements to use this? Root with Magisk is a given - but Linux kernel version, Android OS version, device, etc?
The script adjusts only the tweaks that are compatible with your kernel version. It contains tweaks for EAS and HMP, and supports 3.18 and above in testing; it likely supports even lower. Otherwise, it's totally universal.
KTweak now has an official Telegram channel for release information and changelogs: @ktweak
Thank you for your work and the great explanation.
Will this work on the Lineage kernel?
Hello there. I'm using ver 1.0.9. I updated and tried 1.1.0, but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. May that be the cause?
Now I'm back on 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0 I was still experiencing problems.
lapirado said:
Thank you for your work and the great explanation.
Will this work on the Lineage kernel?
This should work on any kernel
myaslioglu said:
Hello there. I'm using ver 1.0.9. I updated and tried 1.1.0, but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. May that be the cause?
Now I'm back on 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0 I was still experiencing problems.
Hi! Thanks for the report. Which device and Android version are you using? I have an idea of why this could be happening.
myaslioglu said:
Hello there. I'm using ver 1.0.9. I updated and tried 1.1.0, but ended up with reboots after boot completed. I am using the NX kernel, which sets vfs_cache_pressure to 100. May that be the cause?
Now I'm back on 1.0.9 and everything seems fine. However, I had to uninstall and reinstall Magisk, because when I flashed 1.0.9 over 1.1.0 I was still experiencing problems.
Hi myaslioglu,
I've released v1.1.1, which adds an additional 20-second sleep after Android reports that it has been initialized. This should prevent init and any post-boot init scripts from running alongside KTweak. I believe your issue stems from ZRAM resizing itself during bootup, when memory is most scarce, possibly causing your device to think it failed to boot correctly.
Please let me know if v1.1.1 fixes your issue. It is live on GitHub releases and Telegram.
tytydraco said:
Hi myaslioglu,
I've released v1.1.1, which adds an additional 20-second sleep after Android reports that it has been initialized. This should prevent init and any post-boot init scripts from running alongside KTweak. I believe your issue stems from ZRAM resizing itself during bootup, when memory is most scarce, possibly causing your device to think it failed to boot correctly.
Please let me know if v1.1.1 fixes your issue. It is live on GitHub releases and Telegram.
Hello. Sorry, I forgot to report my device; it is the S8 Exynos, just in case. But 1.1.1 fixed the issue. Works perfectly! Thanks
myaslioglu said:
Hello. Sorry, I forgot to report my device; it is the S8 Exynos, just in case. But 1.1.1 fixed the issue. Works perfectly! Thanks
1.1.2 got me reboots again. What has changed? Only the swappiness? It worked fine on 1.1.1 and did not work on 1.1.0, as mentioned above.
Can ZRAM be the issue?
1.1.1 made my OP7 Pro freeze with the Weeb kernel, and we have the same device, draco.
Doesn't work on the stock kernel of the Redmi Note 9S; it freezes a few seconds after boot, forcing me to hard restart.
Is it compatible with the J5 (2015)?
To those of you getting freezes, I have identified the cause to be related to ZRAM. I will push an update today that will remove ZRAM tweaking from the script.
The reason I believe this is happening is because ktweak tries to resize the ZRAM. That requires all data that is currently in ZRAM to decompress and enter your main memory unit. If we run out of memory during this process, we will freeze.
The solution is to not adjust the zram size when using ktweak. Sorry for the inconveniences that may have been caused. I'll get straight to fixing this as soon as possible.
I heard about your work from the XDA Telegram channel.
I read the info and thought about testing it, but some users reported that the latest update has a freeze issue. I'll test the update with the freeze issue fixed.
tytydraco said:
To those of you getting freezes, I have identified the cause to be related to ZRAM. I will push an update today that will remove ZRAM tweaking from the script.
The reason I believe this is happening is because ktweak tries to resize the ZRAM. That requires all data that is currently in ZRAM to decompress and enter your main memory unit. If we run out of memory during this process, we will freeze.
The solution is to not adjust the zram size when using ktweak. Sorry for the inconveniences that may have been caused. I'll get straight to fixing this as soon as possible.
I even disabled ZRAM on my OP7 Pro; it feels more fluid.
Does it work on the Mi 10 Pro?
Vivo V9
How can I root the Vivo V9 1723?
I tried many methods but nothing worked for me.
Please help.
tyagis777 said:
How can I root the Vivo V9 1723?
I tried many methods but nothing worked for me.
Please help.
Really? You created an account for this post?
