About spark metrics

spark can report and calculate a number of different metrics.

In all cases, the source data for the metrics comes from elsewhere. If something seems wrong, it is likely because the raw data spark receives is incorrect.

| Metric Name | Data Source |
| --- | --- |
| TPS | Server event (via spark's TickHook interface) |
| MSPT | Server event (via spark's TickReporter interface) |
| CPU Usage | Java API (jdk.management/OperatingSystemMXBean) |
| Memory Usage | Java API (jdk.management/OperatingSystemMXBean) & /proc/meminfo (Linux only) |
| Disk Usage | Java API (java.base/FileStore) |
| GC | Java API (jdk.management/GarbageCollectorMXBean) |
| Network Usage | /proc/net/dev (Linux only) |
| Player Ping | Server API (via spark's PlayerPingProvider interface) |
| CPU Name | /proc/cpuinfo on Linux, wmic cpu on Windows |
| OS name and version | /etc/os-release on Linux, wmic os on Windows |
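As an illustration, the same Java management APIs listed above can be queried directly from any standalone program. The sketch below (MetricsProbe is a hypothetical name, not part of spark) reads CPU load, physical memory, and GC statistics; it assumes Java 14+ for getCpuLoad() and getTotalMemorySize():

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class MetricsProbe {
    // CPU and memory figures come from the jdk.management OperatingSystemMXBean,
    // the same source spark uses for its CPU Usage / Memory Usage metrics.
    static final OperatingSystemMXBean os =
            (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

    public static void main(String[] args) {
        // Loads are fractions in [0.0, 1.0], or -1.0 if not yet available.
        System.out.println("process cpu load: " + os.getProcessCpuLoad());
        System.out.println("system cpu load:  " + os.getCpuLoad());
        System.out.println("physical memory:  " + os.getTotalMemorySize() + " bytes");

        // GC statistics come from GarbageCollectorMXBean, one bean per collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}
```

Running a probe like this alongside the server is a quick way to confirm whether an odd-looking spark reading matches what the JVM itself reports.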

Containers and Docker

Occasionally, we see some metrics (mostly CPU/Memory Usage) being misreported when the server (and by extension spark) is running inside a container (Pterodactyl, etc.).

There's not much spark can do about this. As you can see above, spark just uses the standard Java and OS APIs to obtain raw metrics data. If it's not accurate, then this is either a problem with your setup or a Java/Docker/OS bug.
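One way to check this yourself is to ask the JVM what resources it believes it has. The minimal sketch below (ContainerCheck is a hypothetical name) prints the CPU count and heap ceiling the JVM sees; inside a container with cgroup limits, these reflect the container's view, not the host's, and if they look wrong here, spark's figures will be wrong too:

```java
public class ContainerCheck {
    public static void main(String[] args) {
        // Inside a container, a container-aware JVM derives these from the
        // cgroup limits rather than the host hardware.
        System.out.println("available processors: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("max heap: "
                + Runtime.getRuntime().maxMemory() + " bytes");
    }
}
```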