Overclocked the Verizon Galaxy S5

Actually, I’m not the first one to do this. It’s been done dozens of times before, and quite frankly, to higher clock speeds. So why make my own overclocked kernel? Well, I think there is one difference between my overclocking and the others: I don’t increase the voltage while adding faster processing power.

Typically, in my kernels I try to focus on battery life over performance. However, I like getting the most bang for the buck, too. So, in this kernel, I increased the frequency by 3%, but did not increase the voltage at all. That squeezes the maximum benefit out of the same amount of voltage, which equals an increase in performance without an extra drain on the battery.

Okay, at least not a noticeable drain on the battery. Ohm’s law still holds, so the resistance will drop slightly as the frequency goes up, because the chip runs slightly hotter, and that microscopically lowers a semiconductor’s resistance and nudges the formula. When you increase the voltage, however, the power formula changes dramatically.

For example, some fictitious numbers for conceptualization:

Power = Current x Voltage. Let’s say: (P) 12 = (I) 2 x (E) 6.

If we increase the Voltage, the change is drastic: (P) 14 = (I) 2 x (E) 7. And really, with the resistance staying fixed, Ohm’s law pushes the current up along with the voltage (I = 7/3, about 2.33), so the jump is even bigger: (P) 16.3 = (I) 2.33 x (E) 7.

If we don’t increase the voltage, the change is microscopic, because the only effect is that the higher frequency ultimately raises the heat (very slightly). Ohm’s law says I=E/R, so our formula looks like this: Power = Voltage/Resistance x Voltage, or (P) 12 = ([E] 6 / [R] 3) x (E) 6. So if the heat rises microscopically, the semiconductor resistance lowers microscopically*, and the power only changes microscopically. So, our new fictitious formula looks like this: (P) 12.04 = ([E] 6 / [R] 2.99) x (E) 6.
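
If you want to run those numbers yourself, here is a minimal C sketch of the same arithmetic. The values are the same fictitious ones as above (the 2.99 ohm figure is just an illustration, not anything measured on the phone):

[CODE]
#include <stdio.h>

/* P = V * V / R, the same form as the Ohm's-law power formula above. */
static double power(double volts, double ohms)
{
    return volts * volts / ohms;
}

int main(void)
{
    /* Fictitious baseline: 6 volts across 3 ohms. */
    printf("Baseline:         %.2f watts\n", power(6.0, 3.0));

    /* Raise the voltage to 7 volts, resistance unchanged. */
    printf("Voltage increase: %.2f watts\n", power(7.0, 3.0));

    /* Keep 6 volts, but the resistance dips slightly from heat. */
    printf("Slightly warmer:  %.2f watts\n", power(6.0, 2.99));

    return 0;
}
[/CODE]

That prints roughly 12.00, 16.33, and 12.04 watts: the voltage bump moves the power by whole watts (the resistance is fixed, so the current rises too), while the heat-induced resistance change only moves it by hundredths.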

Either way, you can check out the commit here:

https://github.com/alaskalinuxuser/android_kernel_samsung_klte/commit/dd3e95708f0f4166486fc4341691826ea303d993

Linux – keep it simple.

*Normally, in a wire, a rise in heat causes the resistance to increase. However:

“In a semi-conductor, there is an energy gap between the (filled) valence and the (empty) conduction band. At zero temperature, no charges are in the conduction band and the resistance should be infinite as the system behaves basically like an insulator. If you turn on the temperature, some electrons will start to occupy the conduction band and thus contribute to conduction, lowering the [resistance].”  https://physics.stackexchange.com/users/661/lagerbaer
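
Purely to illustrate the direction of that effect (not its size; a real, heavily doped CPU die behaves far more tamely than the intrinsic silicon in this toy model), here is a small C comparison of a metal trace and an intrinsic semiconductor over a tiny temperature bump. All of the constants are made up or generic, not measured from the S5:

[CODE]
#include <stdio.h>
#include <math.h>

#define K_BOLTZ_EV 8.617e-5   /* Boltzmann constant, eV per kelvin */

/* Metal: resistance creeps UP roughly linearly with temperature. */
static double metal_ohms(double r_300k, double kelvin)
{
    const double alpha = 0.004;   /* generic metal temperature coefficient */
    return r_300k * (1.0 + alpha * (kelvin - 300.0));
}

/* Intrinsic semiconductor: resistance drops as heat promotes carriers
 * across the band gap into the conduction band. */
static double semi_ohms(double r_300k, double kelvin)
{
    const double gap_ev = 1.1;    /* roughly silicon's band gap */
    double ref = exp(gap_ev / (2.0 * K_BOLTZ_EV * 300.0));
    return r_300k * exp(gap_ev / (2.0 * K_BOLTZ_EV * kelvin)) / ref;
}

int main(void)
{
    /* Both start at a made-up 3 ohms at 300 K. */
    for (double t = 300.0; t <= 301.0; t += 0.5)
        printf("%.1f K: metal %.4f ohms, semiconductor %.4f ohms\n",
               t, metal_ohms(3.0, t), semi_ohms(3.0, t));
    return 0;
}
[/CODE]

The metal column ticks up while the semiconductor column ticks down, which is the whole point of the quote above; the actual numbers on a real chip are nowhere near this dramatic.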

 

CPU and GPU voltage control are back!

CPU and GPU voltage control are back! To God be the glory!

Since moving to Android Nougat, I had not been able to make GPU and CPU voltage control work on the Samsung Galaxy S4, but that changed today. You can see the commit here:

https://github.com/alaskalinuxuser/android_kernel_samsung_jf/commit/9544cab218a4348563067588b8b607d6e9d7ab11

Finally, the end user has the ability to control the CPU and GPU voltage with apps like Kernel Adiutor and my GPU Voltage Control App.

You can find my GPU voltage control app here:

https://forum.xda-developers.com/galaxy-s4-tmobile/themes-apps/app-gpu-voltage-control-app-aklu-kernels-t3506985

Remember, changing your CPU or GPU voltages is dangerous. Don’t make drastic changes, and be careful!

Linux – keep it simple.

The easy guide to understanding CPU and GPU governors

There are a lot of guides out there about CPU and GPU governors. They are all very informative, and usually somewhat technical. The other day, however, I needed to explain CPU and GPU governors to a non-technical person, and I realized that a lot of big verbiage and vague terms get thrown around, which helps the technically inclined but clouds the issue for everyone else. So, I decided to share my non-technical explanation here, in hopes that it will help others as well. This is not a perfect analogy, nor a technical one, but I think it gets the job done.
Consider driving your car in the rain.
The more it rains, the less you can see, and hence the slower you should drive. Each swipe of the windshield wiper clears the window. The rain is like the CPU/GPU load, or how much work the CPU/GPU needs to do; the wiper speed is like the clock speed that clears that work away.

Sometimes it is raining very hard. Sometimes it is a sprinkle. But in this make-believe world, it is always raining.

As the driver of the car, you get to pick the speed at which the wipers swipe. It is entirely up to you: you are the governor, the one in charge of the windshield wipers. You can’t control how much it rains, but you can control how fast your wipers swipe the windshield clear.

There are many types of drivers out there, just like there are many governors. However, they all fall into several basic categories.

The Performance type governors are like a driver who gets into the car and turns the wipers on full speed, even if it is barely raining. No matter how much rain does or does not fall on the windshield, he keeps the wipers going full blast. This works very well to keep the windshield clear. However, it also makes the wiper blades get pretty hot when it is just a sprinkle. All the friction from swiping back and forth will wear out his wiper blades, and he will have to replace them a lot sooner, but his windshield will stay clear!

The Powersave type of governors are like a driver who gets into her car and turns the wipers on at the lowest possible speed, with the longest delay between swipes. This works great if it is just sprinkling, but any time she gets into a downpour, she practically has to come to a stop, because she can’t see where she is going through all that rain on her windshield.

The On Demand type governors are the drivers who are always adjusting by wild amounts. “Oh, it’s raining harder, I will turn the wipers all the way up!” he says. “Oh, I guess that was too much. I’ll turn them down now.” Back and forth this driver goes, always pushing up to maximum wiper speed, only to find that he should probably turn the wiper speed back down. Often this driver over-adjusts for the actual amount of rain, but his windshield stays pretty clear. Essentially, he has become his own delay module: he waits for rain to build up on the windshield, turns the wipers up to clear it, then practically turns them off, and repeats, over and over again.

The Conservative type governors, however, take a more methodical approach. When she sees that it is raining harder, she turns her wiper speed up one notch. Then she waits a few seconds to see how well that works. Not enough wiper speed to keep the window clear? She turns the wiper speed up another notch. Still not enough? Up another notch. One notch at a time, with a moment to check the results before making another correction. Then, if the wipers are too fast, she turns them down one notch. The process continues, constantly adjusting, one notch at a time. This can work very well. It can also be problematic if there is a huge change in the amount of rain: if it was sprinkling and then starts to pour, it may take a while for her wiper speed to “catch up” with the amount of rain.

Of course, there are Ideal frequency governors. This man has already determined that if it is raining, he will use a particular delay or speed setting for his wipers. He may adjust them slightly if it is raining more or less, but he will always strive to keep his wipers at the setting he already determined was best. Usually this setting is somewhere in the middle: not really performance, not quite powersave, just somewhere in the middle that he thinks is best. This can work well for average use, but it can tax your battery while “coasting” and be less able to handle heavy rains (high workloads).

Then there are the dependent type of governors. This person gets into her car and begins to drive through the rain. Rather than setting her wipers based on her view out the window, she sets her wiper speed based on some other variable. For instance, the Intellidemand governor sets the CPU frequency based on GPU speed. This would be like a woman who sets her wiper speed based on her car’s speedometer: if she is driving faster, she turns up the wiper speed; if she is driving slower, she turns it down. It makes sense, but it only works as well as the thing controlling it. It seems like a good idea: if you drive faster, you need a clearer windshield, so you should wipe the rain away faster. The only issue is that it may not be raining very hard, yet because you are driving faster, your wipers swipe faster anyway. Generally this works very well, but it may not work in every condition. Some of these governors have other triggers, such as “touch poke,” where the CPU frequency, or wiper speed, goes up when you touch the screen, much like turning up the wiper speed when the high-beam headlights get turned on.

Essentially, all governors are like one, or some combination, of the above. Most simply fine-tune the variables or combine functions of the above to get the results they want. Some are designed for power saving, others for performance.
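
For the more technically inclined, here is a rough C sketch of the difference between the On Demand style and the Conservative style of decision making described above. This is not the real kernel code for either governor (the real ones live under drivers/cpufreq in the kernel source and have many more tunables); it is just the wiper logic from the analogy written out, with made-up load thresholds and a made-up five-step frequency table:

[CODE]
#include <stdio.h>
#include <stddef.h>

/* Made-up frequency table, in kHz, smallest to largest. */
static const int freq_table[] = { 384000, 810000, 1134000, 1458000, 1890000 };
static const size_t nfreqs = sizeof(freq_table) / sizeof(freq_table[0]);

/* On Demand style: if the load crosses the up threshold, jump straight
 * to the maximum; otherwise pick something proportional to the load. */
static int ondemand_next(int load_pct)
{
    if (load_pct > 80)
        return freq_table[nfreqs - 1];
    size_t target = (size_t)((load_pct * (int)(nfreqs - 1)) / 100);
    return freq_table[target];
}

/* Conservative style: move up or down one table step at a time. */
static size_t conservative_next(size_t cur_idx, int load_pct)
{
    if (load_pct > 80 && cur_idx < nfreqs - 1)
        return cur_idx + 1;
    if (load_pct < 20 && cur_idx > 0)
        return cur_idx - 1;
    return cur_idx;
}

int main(void)
{
    int loads[] = { 10, 95, 95, 30, 30, 30 };  /* pretend "rain" samples */
    size_t cons_idx = 0;

    for (size_t i = 0; i < sizeof(loads) / sizeof(loads[0]); i++) {
        cons_idx = conservative_next(cons_idx, loads[i]);
        printf("load %3d%%: ondemand -> %d kHz, conservative -> %d kHz\n",
               loads[i], ondemand_next(loads[i]), freq_table[cons_idx]);
    }
    return 0;
}
[/CODE]

Feeding it that pretend sequence of loads shows the On Demand column leaping straight to the ceiling and back, while the Conservative column climbs one notch at a time, just like the two drivers above.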
So, which one is the best?

Well, that depends. Is it raining really hard all the time? Does it barely rain at all? Is there just one good gully-washer a day, and other than that it is just a sprinkle?

Lots of people argue over which one is the best, but it is really hard to quantify. The reality of it is not which governor is the “best” but which governor is the “best for you”. Only you know what your daily “rainfall” is like. Perhaps you are okay with driving a little slower in the rain. You might even enjoy it. Maybe you have ADD and need that windshield spotless all the time. I don’t know.

A few thoughts for you to consider though:

Do you need your phone to be battery friendly?
I get up at 4:30 AM every morning for work. I put my phone back on the charger every night around 8-9 PM, so my battery only has to last 16-17 hours. Typically, based on my choices and daily use, I average about 50% battery life left when I put it on the charger. Based on my usage, I could choose a more performance-driven governor and not have to worry too much about battery usage. If you are always out and about, in the wild, hiking, camping, or fishing, you may want to consider a more battery-friendly option to extend your use for longer trips between chargers. My point being that it does you little good to choose so many battery-friendly options that your phone lasts 6 days on a charge when you plug it in every night anyway.

Do you need your phone to outperform anybody?
I don’t play video games. I rarely watch youtube/hulu/netflix/vudu/(name your video streaming service here) on my phone. Do I need a performance-based governor? Sure, the benchmarks are impressive, but does it really matter that you are clocking max speeds to check your email?

You don’t have to choose just one setting.
Many people get “stuck in a rut” (back to our car analogy) when it comes to settings like governors. With apps such as Kernel Adiutor, Device Control, and others, you can set profiles, or make backup configurations that you can quickly apply when you want to. Perhaps you hardly use your phone when you are at work but become a gaming fiend at home. Maybe it is the opposite: at work you use your phone for serious number crunching with some math apps, and at home it just becomes your telephone. Choose a different option that suits your needs in each instance.

What do I use?
Well, again, it really depends on what you do day to day. For me, I play chess on my phone, read a Bible app, check my emails (work and home), browse the internet, Github, WordPress and XDA, take a few pictures of the kids, make a few phone calls, and send a few texts. That’s it. So for me, I use the Lionheart governor for my CPU. The jury is still out on the GPU, but I have been testing the Simple GPU governor, and I think that it works pretty well.

Linux – keep it simple.

Overclocking the CPU on a Samsung Galaxy S4

I have been trying to follow every guide out there to overclock my Samsung Galaxy S4 (SGH-M919), and until now I have had absolutely no luck! It wasn’t until I saw some work on a kernel by Faux123 that I realized the difference in kernel setups. Once I tried it, by God’s grace, it worked! Granted, I couldn’t straight up implement Faux123’s work, because that was for a slightly different kernel and setup, but the overall ideas and information helped tremendously! Knowing that I had so much trouble makes me want to share it here, so others can use this information.

The reason I was having so much trouble is that the guides I was reviewing were written for chips with a different control structure than that of the S4. Overall it works the same, but the syntax and setup were different, causing me to create never-ending bootloop kernels that couldn’t do anything, let alone be faster versions of their predecessor!

Ultimately, overclocking your CPU, at least on the S4, comes down to these two files:

kernel_samsung_jf/arch/arm/mach-msm/acpuclock-8064.c
kernel_samsung_jf/arch/arm/mach-msm/cpufreq.c

Granted, I am still new at this, and there are probably better and more finely tuned ways to do this, but here is how I did it. In the acpuclock-8064.c file, there are several tables of CPU frequencies for different levels and conditions. Here is the pre-overclocked version:

[CODE]
static struct acpu_level tbl_PVS6_2000MHz[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000 },
{ 1, { 486000, HFPLL, 2, 0x24 }, L2(5), 875000 },
{ 1, { 594000, HFPLL, 1, 0x16 }, L2(5), 875000 },
{ 1, { 702000, HFPLL, 1, 0x1A }, L2(5), 875000 },
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
{ 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 975000 },
{ 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1000000 },
{ 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1025000 },
{ 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1062500 },
{ 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1100000 },
[/CODE]

The second column lists the frequencies that are available, and the last column tells the CPU how much voltage to use to make each one work. Here is where I did some basic math. I wanted the top frequency to be more than 1890000, but how much was too much, and what numbers were appropriate? Faux123’s work didn’t tell me that, but looking it over did show me where to make the edits. Essentially, I decided that these tables must be based on some sort of mathematical principle. Turns out I was right.

1890000-1782000=108000
1782000-1674000=108000
1674000-1566000=108000
etc….

So, I concluded that the frequency steps *should* be made in 108 MHz (108000 kHz) jumps. So, by extrapolation:

1890000+108000=1998000

This then gave me the new target frequency of 1998000!
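
If you would rather let the computer do the subtraction, here is a tiny C check of the same math, using the top of the stock frequency column above (the values are in kHz, exactly as they appear in the kernel table):

[CODE]
#include <stdio.h>

int main(void)
{
    /* Top of the stock tbl_PVS6_2000MHz frequency column, in kHz. */
    int freqs[] = { 1566000, 1674000, 1782000, 1890000 };
    int n = sizeof(freqs) / sizeof(freqs[0]);

    /* Confirm the table climbs in even steps. */
    for (int i = 1; i < n; i++)
        printf("step %d: %d kHz\n", i, freqs[i] - freqs[i - 1]);

    /* Extrapolate one more step past the stock maximum. */
    printf("next frequency: %d kHz\n",
           freqs[n - 1] + (freqs[n - 1] - freqs[n - 2]));
    return 0;
}
[/CODE]

It prints 108000 kHz for every step and 1998000 kHz for the extrapolated next entry, matching the math above.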

Then there was the question of voltage. Maximum and minimum voltages are set at the beginning of the file, like so:

[CODE]
[CPU0] = {
.hfpll_phys_base = 0x00903200,
.aux_clk_sel_phys = 0x02088014,
.aux_clk_sel = 3,
.sec_clk_sel = 2,
.l2cpmr_iaddr = 0x4501,
.vreg[VREG_CORE] = { "krait0", 1300000 },
.vreg[VREG_MEM] = { "krait0_mem", 1150000 },
.vreg[VREG_DIG] = { "krait0_dig", 1150000 },
.vreg[VREG_HFPLL_A] = { "krait0_hfpll", 1800000 },
},
[/CODE]

I am not going to lie to you and tell you that I understand what all of that means, but what I can tell you is that VREG_CORE seems to be the maximum voltage you can apply to that core. In the case of my overclocking experiment, I twiddled with several variations, but eventually decided to leave the max where it was.

Back to our example, I decided to go with this:

[CODE]
static struct acpu_level tbl_PVS6_2000MHz[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000 },
{ 1, { 486000, HFPLL, 2, 0x24 }, L2(5), 875000 },
{ 1, { 594000, HFPLL, 1, 0x16 }, L2(5), 875000 },
{ 1, { 702000, HFPLL, 1, 0x1A }, L2(5), 875000 },
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
{ 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 975000 },
{ 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1000000 },
{ 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1025000 },
{ 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1062500 },
{ 1, { 1998000, HFPLL, 1, 0x46 }, L2(14), 1100000 },
{ 0, { 0 } }
[/CODE]

Notice that I didn’t increase the voltage at all. Actually, after several tests, I found that I could do more with less! In this case (probably not in every case), I could increase the frequency and essentially undervolt it, using the same voltage to do more work. Before finding this, however, I had worked out a voltage scheme:

1100000-1062500=37500
1062500-1025000=37500
1025000-1000000=25000
1000000-975000=25000
975000-962500=12500
etc….

It would appear that the voltage increments grow larger as the frequency increases. Using this theory and the table, I presumed that the next voltage step would be +50000. The pattern seems to take two steps at the same increment, and then the increment itself grows by 12500. So either the next step was a continuation of +37500, or an additional +37500+12500, which would be +50000. If I were increasing the voltage, these would be some safe steps, provided that the cumulative value did not climb above my max limit of 1300000, defined earlier. If it did need to exceed that, then I would also need to increase the max voltages. In any event, either I am wrong about something, or this is a great undervolting example, because without increasing the voltage, the kernel did overclock by 108 MHz.
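
Here is that reasoning as a small C sketch. It assumes my reading of the pattern is right (two steps at each increment, then the increment itself grows by 12500), takes the top of the stock voltage column from the table above, and checks both candidate steps against the 1300000 VREG_CORE limit defined earlier:

[CODE]
#include <stdio.h>

#define VREG_CORE_MAX 1300000  /* krait0 limit from acpuclock-8064.c above */

int main(void)
{
    /* Top of the stock voltage column, in microvolts. */
    int volts[] = { 1000000, 1025000, 1062500, 1100000 };
    int n = sizeof(volts) / sizeof(volts[0]);

    /* Show the step pattern: 25000, 37500, 37500... */
    for (int i = 1; i < n; i++)
        printf("step: %d uV\n", volts[i] - volts[i - 1]);

    /* Two candidate next steps: repeat the last step, or grow it by 12500. */
    int same = volts[n - 1] + 37500;
    int grown = volts[n - 1] + 37500 + 12500;

    printf("repeat +37500 -> %d uV (%s limit)\n", same,
           same <= VREG_CORE_MAX ? "within" : "over");
    printf("grow   +50000 -> %d uV (%s limit)\n", grown,
           grown <= VREG_CORE_MAX ? "within" : "over");
    return 0;
}
[/CODE]

Both candidates land well under the 1300000 limit, which is why I considered them safe steps, even though in the end I left the voltages alone entirely.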

Now there is another step to do. We still have to edit the cpufreq.c file. Fortunately, Faux123’s work pointed me right to this, and it was a simple change. All one has to do is write in the new upper_limit_freq, which is the new maximum frequency:

[CODE]
static unsigned int upper_limit_freq = 1566000;
#else
static unsigned int upper_limit_freq = 1998000;
#endif
[/CODE]

Pretty slick, huh? I am really glad that people like Faux123 and sites like Github exist, so people who need help (like me) can review their code and find where to make changes for things like overclocking. All I did after editing these two files was compile the new kernel. Granted, I did play around a bit with some other voltages/frequencies, but this was relatively simple and worked great compared to my other failed attempts at following the overclocking guides on the web!

Linux – keep it simple.