Commit 6c3fcf5 (1 parent a8a8eec)

improve transferRequest scope doc;
describe inject once junit extension feature; add jvm heap reference article

File tree: 7 files changed (+319 −0 lines changed)
dropwizard-guicey/src/doc/docs/guide/guice/scopes.md

Lines changed: 56 additions & 0 deletions
@@ -117,6 +117,62 @@ public class RequestBean {
Such an additional call is not required for pure guice-managed request scoped objects.

Note that you **can't** use the scope inside the scope (e.g. call one transferRequest action inside another).
If you need to transfer the scope to another sub-thread (a 3rd thread), make sure that
the transferRequest action is called after the current action closes.

!!! hint
    The current scope object is stored in a thread local (for http it's `GuiceFilter.localContext`).
    Each time you call `ServletScopes.transferRequest` it takes this context from the thread local.
    This means all transferRequest actions created in the current thread (or inside such an action)
    will use THE SAME context instance.

    There is a simple `ReentrantLock` in the context which prevents simultaneous context usage
    from multiple threads (it locks on scope opening). So if you spawn and wait for another thread, calling
    a transferRequest action inside the current action, you'll get a deadlock.

    Pay attention: spawning a new thread using the request context from the current scope is
    completely normal as long as you don't wait for the result (the lock is released as soon as
    your current context closes).

If you need to spawn a new thread (requiring request scope) and wait for its result within a `transferRequest` scope,
you can prepare several transfer actions ahead of time (a separate action for each thread):

```java
// action for the first thread
final Callable<String> action1 = ServletScopes.transferRequest(...);
// action for the second (sub) thread
final Callable<String> action2 = ServletScopes.transferRequest(...);

// note: both actions share the same context instance
// (checked exceptions are omitted for brevity)

CompletableFuture.runAsync(() -> {
    action1.call();
    CompletableFuture.runAsync(() -> {
        action2.call();
    }).join();
}).join();
```

or return another scoped action from the first one (if the first thread's result must be used in the third thread):

```java
final Callable<Callable<String>> action1 = ServletScopes.transferRequest(() -> {
    // do something, then create an action for the sub-thread
    return ServletScopes.transferRequest(...);
});

// in this case the context instance will also be THE SAME
// (checked exceptions are omitted for brevity)

CompletableFuture.runAsync(() -> {
    final Callable<String> action2 = action1.call();
    CompletableFuture.runAsync(() -> {
        action2.call();
    }).join();
}).join();
```

### Request scope simulation

Sometimes, request scoped beans may need to be used somewhere without request (for example,

dropwizard-guicey/src/doc/docs/guide/test/junit5/run.md

Lines changed: 36 additions & 0 deletions
@@ -111,6 +111,42 @@ static TestGuiceyAppExtension ext = TestGuiceyAppExtension.forApp(..)
Application lifecycle will remain: events like `onApplicationStartup` would still be
working (and all registered `LifeCycle` objects would work). Only managed objects are ignored.

### Inject test fields once

By default, guicey injects test field values before every test method, even if the same
test instance is used (`TestInstance.Lifecycle.PER_CLASS`). This should not be a problem
in the majority of cases because guice injection takes very little time.
It is also important for prototype beans, which will be refreshed for each test.

But it is possible to inject fields just once:

```java
@TestGuiceyApp(value = App.class, injectOnce = true)
// by default a new test instance is used for each method, so the injectOnce option would be useless
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class PerClassInjectOnceGuiceyTest {
    @Inject
    Bean bean;

    @Test
    public void test1() {..}

    @Test
    public void test2() {..}
}
```

In this case, the same test instance is used for both methods (`Lifecycle.PER_CLASS`)
and the `Bean bean` field would be injected just once (`injectOnce = true`).

!!! tip
    To check the actual fields injection time, enable debug (`debug = true`) and
    it will [print injection time](debug.md#startup-performance) before each test method:

    ```
    [Before each]                : 2.05 ms
        Guice fields injection   : 1.58 ms
    ```

## Testing web logic

`@TestDropwizardApp` is useful for complete integration testing (when web part is required):
Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
# Essential JVM Heap Settings

!!! important
    [Original article](https://medium.com/itnext/essential-jvm-heap-settings-what-every-java-developer-should-know-b1e10f70ffd9?sk=24f9f45adabf009d9ccee90101f5519f) (source)

JVM heap optimization in newer Java versions is highly advanced and container-ready.
This is great for quickly getting an application into production without having to deal with
various JVM heap related flags. But the default JVM heap and GC settings might surprise
you. Know them before your first OOMKilled encounter.

!!! tip ""
    You need to be on Java 9+ for anything written below to be applicable. Still on Java 8?
    Time to upgrade Java or job…
### Running your Java application under the layers of Container or Kubernetes? The environment variable JAVA_TOOL_OPTIONS is your friend

If you are running in a constrained environment with limited access to modify the command
`java -jar ...`, don't worry: it is very easy to pass in custom JVM flags. Just set
the environment variable `JAVA_TOOL_OPTIONS` and it will be automatically picked up by
the JDK. This is true for OpenJDK and its variants like RedHat. If you are using a
different JDK, check its documentation for an equivalent variable.

You will see a log line like this during startup:

```
Picked up JAVA_TOOL_OPTIONS: -XX:SharedArchiveFile=application.jsa -XX:MaxRAMPercentage=80
```

Be aware that if you have multiple JVM applications running, setting the environment
variable might affect all of them.
### No idea what heap size or JVM flags are active? Use -XX:+PrintCommandLineFlags

Unless you have explicitly set the `-Xmx/-Xms` flags, you probably have no idea about
the available heap size. Metrics may give a hint, but that is a lagging indicator.
Set the flag `-XX:+PrintCommandLineFlags` to force the JVM to print all active flags
at startup.

It would look something like this:

```
-XX:InitialHeapSize=16777216 -XX:MaxHeapSize=858993459 -XX:MaxRAM=1073741824 -XX:MaxRAMPercentage=80.000000 -XX:MinHeapSize=6815736 -XX:+PrintCommandLineFlags -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:SharedArchiveFile=application.jsa -XX:-THPStackMitigation -XX:+UseCompressedOops -XX:+UseSerialGC
```

This is useful to gain insight into your current JVM setup.

Another flag, `-XX:+PrintFlagsFinal`, shows every flag, including defaults. But it might
be overkill to include in every application startup. If your Java application is
wrapped inside a container image, this command is a quick way to see the JVM flags that
will be applied: `docker run --rm --entrypoint java myimage:latest -XX:+PrintFlagsFinal -version`
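Individual flag values can also be read programmatically through the HotSpot diagnostic MXBean (HotSpot-specific, exposed by the `jdk.management` module); a minimal sketch, not from the original article:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class FlagValue {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Read the resolved values of individual flags by name
        System.out.println("MaxHeapSize      = " + hotspot.getVMOption("MaxHeapSize").getValue());
        System.out.println("MaxRAMPercentage = " + hotspot.getVMOption("MaxRAMPercentage").getValue());
    }
}
```

Useful when you want to assert the effective configuration from a health check rather than parse startup logs.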
### I am applying container memory limits. Do I need to set heap flags?

It depends. For a typical application not requiring optimizations, the default behaviour
would be fine. The JVM will automatically apply a percentage of the available memory as the maximum
heap size. Just make sure to leave some space for non-heap stuff, sidecars, agents, etc.
How much? Read on.
### By default only 25% of available memory is used as max heap!

With many JDK vendors, a container with a 1 GB memory limit will only get 256 MB of maximum
heap size. This is due to the default flag `-XX:MaxRAMPercentage=25`.
This conservative number made sense back in the non-container days, when multiple JVMs would run
on the same machine. But when running in containers with memory limits set correctly,
this value can be increased to 60, 70 or even 80% depending on the application's non-heap
memory usage like byte buffers, page cache, etc.

```
> docker run --memory 2g openjdk:24 java -XX:+PrintFlagsFinal -version | grep MaxRAMPercentage
   double MaxRAMPercentage = 25.000000 {product} {default}
```
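From inside the application, the effective maximum heap (whatever `-Xmx` or `MaxRAMPercentage` resolved to) can be checked with the standard `Runtime` API; a minimal illustrative sketch:

```java
public class MaxHeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() returns the maximum heap the JVM will attempt to use
        // (the resolved result of -Xmx / -XX:MaxRAMPercentage), in bytes.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```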
### Garbage Collection algorithm changes depending on available memory

Since Java 9, G1 has been the default garbage collection algorithm, replacing the Parallel GC of
previous versions. But there is a caveat! This applies only if the available memory (not heap
size) is at least 2 GB. Below 2 GB, Serial GC is the default algorithm.

```
> docker run --memory 1g openjdk:24 java -XX:+PrintFlagsFinal -version | grep -E "UseSerialGC | UseG1GC"
   bool UseG1GC     = false  {product} {default}
   bool UseSerialGC = true   {product} {ergonomic}

> docker run --memory 2g openjdk:24 java -XX:+PrintFlagsFinal -version | grep -E "UseSerialGC | UseG1GC"
   bool UseG1GC     = true   {product} {ergonomic}
   bool UseSerialGC = false  {product} {default}
```

This is likely because G1 GC carries the overhead of metadata and its own bookkeeping,
which outweighs the benefits in low-memory applications.

You can always set your own GC algorithm with flags like `-XX:+UseG1GC` and `-XX:+UseSerialGC`.
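Which collector ergonomics actually selected can also be verified from inside the JVM by listing the active collector beans; an illustrative sketch using the standard `java.lang.management` API:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ActiveGc {
    public static void main(String[] args) {
        // Collector names identify the algorithm in use, e.g.
        // "G1 Young Generation" / "G1 Old Generation" under G1,
        // "Copy" / "MarkSweepCompact" under Serial GC.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```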
### Kubernetes pod memory limits affect heap sizes

The memory limit set on the pod affects the heap size calculations. The memory request
has no impact; it only affects the scheduling of the pod onto a node.
### JVM flag UseContainerSupport is not necessary

Since Java 10+, the JVM flag `UseContainerSupport` is available and always enabled by default.

```
> docker run --memory 1g openjdk:24 java -XX:+PrintFlagsFinal -version | grep UseContainerSupport
   bool UseContainerSupport = true {product} {default}
```
### Common Heap Regions

The JVM heap space is broadly divided into two regions or generations: the Young and the Old
generation. The Young generation is further divided into the Eden and Survivor spaces.
The Survivor space consists of two equally sized spaces, S0 and S1.

A newly created object is born in the Eden space. If it survives one or two garbage
collections, it is promoted to the Survivor space. If it survives even more garbage
collections, it is considered an elder and promoted to the Tenured or Old space.

```
Total heap size = Eden space + Survivor space + Tenured space
```
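These regions are visible at runtime as heap memory pools; a minimal sketch (pool names depend on the active GC, e.g. `G1 Eden Space` under G1 or `Eden Space` / `Tenured Gen` under Serial GC):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                // getMax() returns -1 when the region has no fixed maximum
                // (e.g. adaptively sized Eden/Survivor under G1)
                System.out.println(pool.getName() + " max=" + pool.getUsage().getMax());
            }
        }
    }
}
```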
### Metrics Gotchas for Serial and G1 GC

Typical heap monitoring view for Serial GC:

[![Monitoring 1](../img/jvm/jvm1.webp)](https://channel.io "Typical Heap monitoring view for Serial GC")

When the available memory is less than 2 GB and Serial GC is active, the max sizes of
the Eden, Survivor and Tenured spaces will be fixed. The size of the Young generation
(Eden + Survivor) is determined by `MaxNewSize`, which usually defaults to 1/3rd of the
max heap size. Within the Young generation, the sizing of Eden and Survivor is
determined via `NewRatio` and `SurvivorRatio`. These default to 2 and 8 respectively in
OpenJDK. Effectively, the Old generation will be twice the size of the Young generation, and
the Survivor space is 1/8th the size of the Eden space.

```
Heap breakup under 2 GB / Serial GC

Container memory limit = 1 GB
|_ Max heap size = 256 MB (25%)
   |_ Young generation =~ 85 MB (1/3 of heap size)
      |_ Eden space =~ 76 MB (85 * 8/9)
      |_ Survivor space =~ 9 MB (85 * 1/9, S0 = 4.5 MB, S1 = 4.5 MB)
   |_ Old generation =~ 171 MB (max heap size - young generation)
```

These numbers would be approximately reflected in the JVM heap metrics.

Typical heap monitoring view for G1 GC:

[![Monitoring 2](../img/jvm/jvm2.webp)](https://channel.io "Typical Heap monitoring view for G1 GC")

The most striking difference in metrics for G1 GC compared to Serial GC is that the
max sizes of the Eden and Survivor spaces show as zero. This is because in G1 GC the sizes
of these spaces are not fixed and are resized after every GC cycle. This can be
confusing in the metrics, as the values are non-zero while the max is zero. The flags
`MaxNewSize`, `NewRatio` and `SurvivorRatio` apply only to generational GCs like Serial and
Parallel, not to G1.

```
Heap breakup over 2 GB / G1 GC

Container memory limit = 2 GB
|_ Max heap size = 512 MB (25%)
   |_ Young generation =~ Adaptive
      |_ Eden space =~ Adaptive / -1 as reported by metrics
      |_ Survivor space =~ Adaptive / -1 as reported by metrics
   |_ Old generation =~ Adaptive / 512 MB as reported by metrics
```
### Metaspace and Compressed Class Space

Misleading Metaspace and Compressed Class Space metric:

[![Monitoring 3](../img/jvm/jvm3.webp)](https://channel.io "Misleading Metaspace and Compressed Class Space metric")

Outside of the heap, an important memory region is the Metaspace, which stores information
about loaded classes, methods, fields, annotations, constants, and JIT code. The size
of the Metaspace is determined by the flag `MaxMetaspaceSize`, which is unlimited by default:
it can use all native memory outside of the heap and within the available memory. If
usage goes beyond this, you would see `java.lang.OutOfMemoryError: Metaspace`. A large
number of loaded classes will increase the Metaspace usage.

Compressed Class Space stores ordinary object pointers ([oops](https://wiki.openjdk.org/display/HotSpot/CompressedOops)) to Java objects by
compressing them from 64 to 32-bit offsets, thereby saving some valuable memory space.
More importantly, it is a sub-region of the Metaspace. The metrics report the size of
the Compressed Class Space as 1 GB since the flag `CompressedClassSpaceSize` is set to 1 GB
by default, irrespective of available memory. It is not allocated unless needed. But
since this is a sub-region of the Metaspace, setting `MaxMetaspaceSize` is enough.
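Because Metaspace usage grows with the number of loaded classes, the class count is worth watching; an illustrative sketch using the standard `ClassLoadingMXBean`:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCount {
    public static void main(String[] args) {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        // Currently loaded classes drive Metaspace usage;
        // a steadily growing count may indicate a classloader leak.
        System.out.println("Loaded classes: " + classes.getLoadedClassCount());
        System.out.println("Total loaded  : " + classes.getTotalLoadedClassCount());
        System.out.println("Unloaded      : " + classes.getUnloadedClassCount());
    }
}
```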
### Reserved Code Cache

Different regions of the JVM's code cache:

[![Monitoring 4](../img/jvm/jvm4.webp)](https://channel.io "Different regions of the JVM’s code cache")

This is the memory space outside the heap that stores the native code generated by the
Just-In-Time (JIT) compiler.

Java source code is compiled into Java byte code, which is executed by the JVM.
The JVM interprets the byte code into OS-specific machine code line-by-line upon every
execution. While this is enough, it would be very slow. The JIT compiler identifies
hotspots (code paths that are frequently accessed), compiles them into native code
and stores it in the Reserved Code Cache. The next time the hot code path requires
execution, no interpretation is needed as the corresponding native code is directly
invoked.

> The interpreter is like asking a professional translator to translate a phrase in an
> unknown language into a familiar language every time, without learning.
>
> JIT compilation is like learning the frequently used phrases of the unknown language
> so as not to rely on the translator all the time.
>
> AOT compilation is like learning the complete language beforehand and never needing
> the translator.

By default, the code cache is segmented into multiple regions for optimization.
These regions include `non-nmethods` (unrelated to user code, internal to the JIT compiler),
`non-profiled nmethods` (native methods that have not been profiled yet) and `profiled
nmethods` (native methods that have been aggressively optimized). The total size of the
reserved code cache is defined via the flag `ReservedCodeCacheSize` and defaults to
240 MB since Java 10.
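The segmented code cache regions (alongside Metaspace and Compressed Class Space) appear as non-heap memory pools; a minimal sketch, assuming a HotSpot JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapPools {
    public static void main(String[] args) {
        // On HotSpot this typically prints pools like
        // "CodeHeap 'non-nmethods'", "CodeHeap 'profiled nmethods'",
        // "CodeHeap 'non-profiled nmethods'", "Metaspace", "Compressed Class Space".
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.println(pool.getName() + " used=" + pool.getUsage().getUsed());
            }
        }
    }
}
```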
### Conclusion

While there is much more to study in this area, I consider the things listed here
must-knows for every Java developer. The next time you encounter OOM errors,
you can check the JVM metrics and immediately gather the relevant information.
