|
# Essential JVM Heap Settings

!!! important
    [Original article](https://medium.com/itnext/essential-jvm-heap-settings-what-every-java-developer-should-know-b1e10f70ffd9?sk=24f9f45adabf009d9ccee90101f5519f) (source)

JVM heap optimization in newer Java versions is highly advanced and container-ready.
This is great for quickly getting an application into production without having to deal
with various JVM heap-related flags. But the default JVM heap and GC settings might
surprise you. Know them before your first OOMKilled encounter.

!!! tip ""
    You need to be on Java 9+ for anything written below to be applicable. Still on Java 8?
    Time to upgrade your Java, or your job…

### Running your Java application under layers of containers or Kubernetes? The environment variable JAVA_TOOL_OPTIONS is your friend

If you are running in a constrained environment with limited access to modify the command
`java -jar ...`, don’t worry: it is very easy to pass in custom JVM flags. Just set the
environment variable `JAVA_TOOL_OPTIONS` and it will be automatically picked up by the
JDK. This is true for OpenJDK and its variants, such as Red Hat's builds. If you are
using a different JDK, check its documentation for an equivalent variable.

You will see a log line like the one below during startup:

```
Picked up JAVA_TOOL_OPTIONS: -XX:SharedArchiveFile=application.jsa -XX:MaxRAMPercentage=80
```

Be aware that if you have multiple JVM applications running, setting the environment
variable might affect all of them.

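You can verify from inside the application that a flag passed this way actually took effect. A minimal sketch (the class name `HeapCheck` is mine, not from the article) that prints the max heap the JVM settled on, for comparison against what you set via `JAVA_TOOL_OPTIONS`:

```java
// Run with, e.g.: JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=80" java HeapCheck
// The JVM logs "Picked up JAVA_TOOL_OPTIONS: ..." before main() starts.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```
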
### No idea what heap size or JVM flags are active? Use -XX:+PrintCommandLineFlags

Unless you have explicitly set the `-Xmx/-Xms` flags, you probably have no idea about
the available heap size. Metrics may give a hint, but they are a lagging indicator.
Set the flag `-XX:+PrintCommandLineFlags` to force the JVM to print all active flags
at startup.

The output looks something like this:

```
-XX:InitialHeapSize=16777216 -XX:MaxHeapSize=858993459 -XX:MaxRAM=1073741824 -XX:MaxRAMPercentage=80.000000 -XX:MinHeapSize=6815736 -XX:+PrintCommandLineFlags -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:SharedArchiveFile=application.jsa -XX:-THPStackMitigation -XX:+UseCompressedOops -XX:+UseSerialGC
```

This is useful for gaining insight into your current JVM setup.

Another flag, `-XX:+PrintFlagsFinal`, shows every flag including defaults, but it might
be overkill to include in every application startup. If your Java application is
wrapped inside a container image, this command is a quick way to see the JVM flags that
will be applied: `docker run --rm --entrypoint java myimage:latest -XX:+PrintFlagsFinal -version`

### I am applying container memory limits. Do I need to set heap flags?

It depends. For a typical application not requiring optimizations, the default behaviour
is fine: the JVM will automatically use a percentage of the available memory as the
maximum heap size. Just make sure to leave some space for non-heap memory, sidecars,
agents, etc. How much? Read on.

### By default only 25% of available memory is used as max heap!

With many JDK vendors, a container with a 1 GB memory limit will only get 256 MB of
maximum heap size. This is due to the default `-XX:MaxRAMPercentage=25`. This
conservative number made sense back in the non-container days, when multiple JVMs would
run on the same machine. But when running in containers with memory limits set
correctly, this value can be increased to 60, 70 or even 80% depending on the
application’s non-heap memory usage, such as byte buffers, page cache, etc.

```
> docker run --memory 2g openjdk:24 java -XX:+PrintFlagsFinal -version | grep MaxRAMPercentage
   double MaxRAMPercentage = 25.000000 {product} {default}
```

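The arithmetic is simple enough to sketch. A toy calculation (class and method names are mine) of the expected max heap for a given container limit and `MaxRAMPercentage`; real JVM ergonomics may round the result by a few MB:

```java
public class MaxHeapEstimate {
    // Expected max heap in MB for a container limit and a MaxRAMPercentage value.
    static long maxHeapMb(long containerLimitMb, double maxRamPercentage) {
        return (long) (containerLimitMb * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        System.out.println(maxHeapMb(1024, 25)); // default: 256 MB out of 1 GB
        System.out.println(maxHeapMb(1024, 80)); // override: 819 MB out of 1 GB
    }
}
```
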
### Garbage Collection algorithm changes depending on available memory

Since Java 9, G1 has been the default garbage collection algorithm, replacing the
Parallel GC of previous versions. But there is a caveat! This applies only if the
available memory (not the heap size) is at least 2 GB. Below 2 GB, Serial GC is the
default algorithm.

```
> docker run --memory 1g openjdk:24 java -XX:+PrintFlagsFinal -version | grep -E "UseSerialGC | UseG1GC"
     bool UseG1GC = false {product} {default}
     bool UseSerialGC = true {product} {ergonomic}

> docker run --memory 2g openjdk:24 java -XX:+PrintFlagsFinal -version | grep -E "UseSerialGC | UseG1GC"
     bool UseG1GC = true {product} {ergonomic}
     bool UseSerialGC = false {product} {default}
```

This is likely because G1 GC carries the overhead of metadata and its own bookkeeping,
which outweighs its benefits in low-memory applications.

You can always set your own GC algorithm with flags like `-XX:+UseG1GC` and `-XX:+UseSerialGC`.

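You can also check which collector ergonomics actually picked from inside the application. A minimal sketch (the class name is mine) using the standard `GarbageCollectorMXBean` API; G1 reports beans named "G1 Young Generation" and "G1 Old Generation", while Serial GC reports "Copy" and "MarkSweepCompact":

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ActiveGc {
    public static void main(String[] args) {
        // Each bean corresponds to one collector the JVM is running with.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```
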
### Kubernetes pod memory limits affect heap sizes

The memory limit set on the pod affects the heap size calculations. The memory request
has no impact; it only affects the scheduling of the pod onto a node.

### JVM flag UseContainerSupport is not necessary

Since Java 10, the JVM flag `UseContainerSupport` is available and enabled by default.

```
> docker run --memory 1g openjdk:24 java -XX:+PrintFlagsFinal -version | grep UseContainerSupport
     bool UseContainerSupport = true {product} {default}
```

### Common Heap Regions

The JVM heap space is broadly divided into two regions or generations: the Young and
the Old generation. The Young generation is further divided into the Eden space and
the Survivor space. The Survivor space consists of two equally sized halves, S0 and S1.

A newly created object is born in the Eden space. If it survives one or two garbage
collections, it is promoted to the Survivor space. If it survives even more garbage
collections, it is considered an elder and promoted to the Tenured (Old) space.

```
Total heap size = Eden space + Survivor space + Tenured space
```

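These regions are visible at runtime through the standard `MemoryPoolMXBean` API. A minimal sketch (the class name is mine); the pool names are collector-specific, e.g. "G1 Eden Space" / "G1 Survivor Space" / "G1 Old Gen" under G1:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                // max can be -1 when the pool size is not fixed.
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        pool.getName(), pool.getUsage().getUsed(), pool.getUsage().getMax());
            }
        }
    }
}
```
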
### Metrics Gotchas for Serial and G1 GC

*Typical heap monitoring view for Serial GC*

When the available memory is less than 2 GB and Serial GC is active, the max sizes of
the Eden, Survivor and Tenured spaces are fixed. The size of the Young generation
(Eden + Survivor) is capped by `MaxNewSize`, which usually works out to about 1/3 of
the max heap size. The split between the Young and Old generations is determined by
`NewRatio`, and the split between Eden and Survivor by `SurvivorRatio`. These default
to 2 and 8 respectively in OpenJDK. Effectively, the Old generation is twice the size
of the Young generation, and the Survivor space is roughly 1/8 the size of the Eden
space.

```
Heap breakup under 2 GB / Serial GC

Container memory limit = 1 GB
|_ Max heap size = 256 MB (25%)
   |_ Young generation =~ 85 MB (1/3 of heap size)
      |_ Eden space =~ 76 MB (85 * 8/9)
      |_ Survivor space =~ 9 MB (85 * 1/9, S0 = 4.5 MB, S1 = 4.5 MB)
   |_ Old generation =~ 171 MB (max heap size - young generation)
```

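The breakup above can be reproduced with simple arithmetic. A rough sketch following the article's simplified ratios (real JVM ergonomics differ by a few MB); class and variable names are mine:

```java
public class SerialHeapBreakup {
    public static void main(String[] args) {
        double limitMb = 1024;                 // container memory limit
        double heapMb  = limitMb * 25 / 100;   // MaxRAMPercentage=25 -> 256 MB
        double youngMb = heapMb / 3;           // young generation ~ 1/3 of heap
        double edenMb  = youngMb * 8 / 9;      // SurvivorRatio=8 -> ~8/9 of young
        double survMb  = youngMb / 9;          // remaining ~1/9, split into S0 and S1
        double oldMb   = heapMb - youngMb;     // the rest is the old generation

        // prints: heap=256 young=85 eden=76 survivor=9 old=171
        System.out.printf("heap=%.0f young=%.0f eden=%.0f survivor=%.0f old=%.0f%n",
                heapMb, youngMb, edenMb, survMb, oldMb);
    }
}
```
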
These numbers are approximately what you will see in the JVM heap metrics.

*Typical heap monitoring view for G1 GC*

The most striking difference in the metrics for G1 GC compared to Serial GC is that
the max sizes of the Eden and Survivor spaces show up as zero. This is because in G1
GC the sizes of these spaces are not fixed; they are resized after every GC cycle.
This can be confusing in the metrics, as the current values are non-zero while the max
is zero. The flags `MaxNewSize`, `NewRatio` and `SurvivorRatio` apply only to
generational GCs like Serial and Parallel, not to G1.

```
Heap breakup over 2 GB / G1 GC

Container memory limit = 2 GB
|_ Max heap size = 512 MB (25%)
   |_ Young generation =~ Adaptive
      |_ Eden space =~ Adaptive / -1 as reported by metrics
      |_ Survivor space =~ Adaptive / -1 as reported by metrics
   |_ Old generation =~ Adaptive / 512 MB as reported by metrics
```

### Metaspace and Compressed Class Space

*Misleading Metaspace and Compressed Class Space metric*

Outside of the heap, an important memory region is the Metaspace, which stores
information about loaded classes: methods, fields, annotations, constants and other
class metadata. The size of the Metaspace is controlled by the flag
`MaxMetaspaceSize`, which is unlimited by default: it can use all native memory
outside the heap, within the available memory. If usage goes beyond the limit, you
will see `java.lang.OutOfMemoryError: Metaspace`. A large number of loaded classes
will increase Metaspace usage.

The Compressed Class Space is a sub-region of the Metaspace that holds class metadata
addressed via compressed class pointers: 32-bit offsets instead of full 64-bit
pointers (similar in spirit to [compressed oops](https://wiki.openjdk.org/display/HotSpot/CompressedOops) for object references), thereby
saving valuable memory. The metrics report its size as 1 GB because the flag
`CompressedClassSpaceSize` defaults to 1 GB irrespective of available memory; that
1 GB is only reserved address space and is not committed unless needed. And since this
is a sub-region of the Metaspace, setting `MaxMetaspaceSize` is enough.

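Both regions show up as non-heap memory pools under the standard HotSpot pool names "Metaspace" and "Compressed Class Space". A minimal sketch (the class name is mine) to inspect them at runtime:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.equals("Metaspace") || name.equals("Compressed Class Space")) {
                // max == -1 means unlimited (no MaxMetaspaceSize set);
                // Compressed Class Space reports its 1 GB default as max.
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        name, pool.getUsage().getUsed(), pool.getUsage().getMax());
            }
        }
    }
}
```
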
### Reserved Code Cache

*Different regions of the JVM’s code cache*

This is the memory space outside the heap that stores the native code generated by the
Just-In-Time (JIT) compiler.

Java source code is compiled into Java bytecode, which is executed by the JVM. The JVM
interprets the bytecode into OS-specific machine code, instruction by instruction, upon
every execution. While this works, it is very slow. The JIT compiler identifies
hotspots (code paths that are frequently executed), compiles them into native code and
stores it in the Reserved Code Cache. The next time the hot code path needs to run, no
interpretation is needed, as the corresponding native code is invoked directly.

> Interpretation is like asking a professional translator to translate a phrase in an
> unknown language into a familiar language every time, without learning.
>
> JIT compilation is like learning the frequently used phrases of the unknown language
> so as not to rely on the translator all the time.
>
> AOT compilation is like learning the complete language beforehand and never needing
> the translator.

By default, the code cache is segmented into multiple regions for optimization.
These regions are `non-nmethods` (internal JIT compiler buffers, unrelated to user
code), `profiled nmethods` (lightly optimized code instrumented with profiling
counters) and `non-profiled nmethods` (fully optimized code without profiling
instrumentation). The total size of the reserved code cache is set via the flag
`ReservedCodeCacheSize` and defaults to 240 MB.

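The segments are exposed as memory pools as well. A minimal sketch (the class name is mine) that lists them; with the segmented code cache enabled (the default), the pools are named "CodeHeap 'non-nmethods'", "CodeHeap 'profiled nmethods'" and "CodeHeap 'non-profiled nmethods'":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCachePools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Segmented code cache pools all share the "CodeHeap" prefix.
            if (pool.getName().startsWith("CodeHeap")) {
                System.out.printf("%s: max=%d bytes%n",
                        pool.getName(), pool.getUsage().getMax());
            }
        }
    }
}
```
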
### Conclusion

While there is much more to study in this area, I consider the things listed here
must-know for every Java developer. The next time you encounter OOM errors, you can
check the JVM metrics and immediately gather the relevant information.