Statistics Metrics
Overview
This page provides detailed explanations of all metrics available in the Statistics view. These metrics are aggregated from historical data and may be calculated differently from their real-time monitoring counterparts.
Statistics is organized into three main sections:
- Project: Metrics aggregated at the project level, useful for project administrators
- NetFUNNEL Server Instance: Metrics at the NetFUNNEL server instance level, useful for server administrators
- Segment: Metrics at the segment level, useful for segment administrators
Metric Summary Table
The following table provides a quick reference for all statistics metrics. Each metric is explained in detail in the sections below.
| Metric | Unit | Description | Real-time Equivalent |
|---|---|---|---|
| Project & Segment Metrics | | | |
| All Requests | TPS | Average TPS of all traffic control API calls | N/A |
| Inflow | TPS | Average TPS of initial entry requests | Entry Requests |
| Processing Time | sec | Average time from entry to key return | Process Time |
| Wait Time | sec | Maximum wait time during period | Wait Time (average) |
| Queue | pax | Cumulative count of users entering the waiting room during the period | Queue Size (current) |
| Limited Inflow | - | Configured capacity limit during period | Limited Inflow |
| Users | pax | Snapshot of active users at specific moment | Active Users (current) |
| Outflow Rate (%) | % | Completion rate of explicit exits | Outflow Rate |
| Bypass | TPS | Rate of bypassed requests | N/A |
| Block | TPS | Rate of blocked requests | N/A |
| NetFUNNEL Server Instance Metrics | | | |
| CPU Occupancy (%) | % | CPU usage at measurement moment | N/A |
| All Requests | TPS | Average TPS of all API requests (including admin APIs) | N/A |
| Session | - | Count of completed sessions (from entry to completion) | N/A |
| Block | TPS | Average rate of requests blocked by the repeated request block feature | N/A |
Global Metric Rules
Before diving into individual metrics, it's important to understand how statistics are calculated:
Rule 1: 1-Minute Aggregation
All metrics are aggregated into a new value every minute. This means:
- Data is collected continuously
- Every minute, a new aggregated value is calculated
- Historical data shows these minute-by-minute values
Example:
00:00:00 - 00:01:00: Processing Time = 2.5 seconds
00:01:00 - 00:02:00: Processing Time = 3.1 seconds
00:02:00 - 00:03:00: Processing Time = 2.8 seconds
Rule 2: Project-Level Aggregation
At the project level, values are the sum of all segments' metrics. This means:
- If you have 3 segments, the project-level metric is the sum of all 3 segments
- Useful for understanding total project activity
Example:
Project has 3 segments:
- Segment A: Inflow = 10 TPS
- Segment B: Inflow = 15 TPS
- Segment C: Inflow = 5 TPS
=> Project Inflow = 30 TPS (10 + 15 + 5)
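To make these two rules concrete, here is a minimal sketch that groups hypothetical per-segment samples into one-minute buckets (Rule 1) and sums segment values to the project level (Rule 2). The `SegmentSample` shape and the helper names are illustrative assumptions, not part of any NetFUNNEL API.

```typescript
// Illustrative only: these shapes and helpers are assumptions, not a NetFUNNEL API.
interface SegmentSample {
  segmentId: string;
  timestamp: number; // epoch milliseconds
  inflowTps: number; // the segment's Inflow for the minute containing `timestamp`
}

// Rule 1: group samples into one-minute buckets keyed by the start of each minute.
function bucketByMinute(samples: SegmentSample[]): Map<number, SegmentSample[]> {
  const buckets = new Map<number, SegmentSample[]>();
  for (const s of samples) {
    const minuteStart = Math.floor(s.timestamp / 60_000) * 60_000;
    const bucket = buckets.get(minuteStart) ?? [];
    bucket.push(s);
    buckets.set(minuteStart, bucket);
  }
  return buckets;
}

// Rule 2: the project-level value for a minute is the sum of all segments' values.
function projectInflow(minuteSamples: SegmentSample[]): number {
  return minuteSamples.reduce((sum, s) => sum + s.inflowTps, 0);
}

// The example from the text: segments A (10), B (15), C (5) => project Inflow = 30 TPS.
const samples: SegmentSample[] = [
  { segmentId: "A", timestamp: 0, inflowTps: 10 },
  { segmentId: "B", timestamp: 5_000, inflowTps: 15 },
  { segmentId: "C", timestamp: 30_000, inflowTps: 5 },
];
for (const [minuteStart, group] of bucketByMinute(samples)) {
  console.log(minuteStart, projectInflow(group)); // 0 30
}
```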
Project and Segment Metrics
These metrics are available at both the project and segment levels. Project-level metrics are the sum of all segment metrics.
All Requests
Unit: TPS (Transactions Per Second)
What it measures: ⚠️ Important: This is the Project/Segment level All Requests metric. It measures only traffic control API calls made by NetFUNNEL agents.
The average rate at which all traffic control API calls are made per second. Think of this as the "total traffic control communication volume" between your application and NetFUNNEL servers.
Why it matters: This metric helps you understand the overall traffic control communication load. It shows how busy your traffic control system is, regardless of what type of request it is.
What's included (Traffic Control API Calls Only): The NetFUNNEL agent makes four types of traffic control calls to the NetFUNNEL server:
- Initial Entry Request: First-time key issuance request (user trying to enter)
- Re-entry Request: Entry request from waiting room (user retrying after waiting)
- Alive Notice Request: Active status notification during Section Control (keeping session alive)
- Complete Request: Key return request (user finished, returning the key)
What's NOT included:
- Data query API requests (for statistics, monitoring dashboards)
- Administrative API requests
- Other non-traffic-control API requests
Real-world example:
During 1 minute:
- 100 Initial Entry Requests
- 50 Re-entry Requests
- 200 Alive Notice Requests
- 100 Complete Requests
Total: 450 traffic control requests in 60 seconds = 7.5 TPS
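If you want to reproduce this arithmetic yourself, a small sketch (the per-minute counts are hypothetical):

```typescript
// Hypothetical per-minute counts of the four traffic control call types.
const callsPerMinute = {
  initialEntry: 100,
  reEntry: 50,
  aliveNotice: 200,
  complete: 100,
};

// All Requests (Project/Segment level) = total traffic control calls / 60 seconds.
const totalCalls = Object.values(callsPerMinute).reduce((sum, n) => sum + n, 0);
console.log(totalCalls / 60); // 7.5 TPS
```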
Important note: This metric is not visible in real-time monitoring. It's only available in Statistics, making it useful for historical analysis of traffic control communication.
Difference from NetFUNNEL Level All Requests:
- This metric (Project/Segment): Only traffic control API calls (4 types listed above)
- NetFUNNEL Level All Requests: ALL API requests including data queries, admin APIs, etc.
When to use:
- Understanding traffic control communication load
- Capacity planning for traffic control operations
- Identifying unusual traffic control patterns
Inflow
Unit: TPS (Transactions Per Second)
What it measures: The average rate of initial key issuance requests per second. This shows how many new users are trying to enter your service each second.
Important: Different from Real-time Monitoring Inflow
⚠️ Don't confuse this with "Inflow" in real-time monitoring! While both use the name "Inflow," they measure different things:
| Aspect | Statistics Inflow | Real-time Monitoring Inflow |
|---|---|---|
| What it measures | Initial entry requests (demand) | Requests actually entering service (actual load) |
| Corresponds to | Entry Requests in real-time monitoring | Inflow in real-time monitoring |
| Includes | First-time entry attempts only | All requests that received PASS (direct entry + re-entry from waiting room) |
| Meaning | How many users want to enter | How many users actually entered |
Statistics Inflow means:
- Shows demand: How many users want to enter
- Measures attempts: First-time entry requests
- Some may go to waiting room, some may enter directly
- Historical average: Not current value
Real-time Monitoring Inflow means:
- Shows actual load: Requests actually entering your service
- Measures successful entries: Requests that received PASS
- Includes both direct entries and re-entries from waiting room
- Current value: Shows what's happening right now
Real-world example:
During 1 minute (60 seconds):
- Second 1-10: 5 initial entry requests
- Second 11-20: 8 initial entry requests
- Second 21-30: 12 initial entry requests
- Second 31-40: 10 initial entry requests
- Second 41-50: 7 initial entry requests
- Second 51-60: 9 initial entry requests
Total: 51 initial entry requests in 60 seconds
Statistics Inflow = 51 ÷ 60 = 0.85 TPS
Note: Not all of these 51 requests entered immediately.
Some went to waiting room, some entered directly.
The actual service load would be shown by real-time monitoring Inflow.
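A rough sketch of the demand-versus-actual-load distinction, using a hypothetical event log; the event names and the `summarize` helper are assumptions for illustration, not NetFUNNEL data structures.

```typescript
// Hypothetical event log for one minute; event names are illustrative, not NetFUNNEL types.
type EntryEvent =
  | { kind: "initial-entry" }                    // first-time key issuance request (demand)
  | { kind: "re-entry" }                         // retry from the waiting room
  | { kind: "pass"; via: "direct" | "waiting" }; // request actually admitted to the service

function summarize(events: EntryEvent[], windowSeconds = 60) {
  // Statistics Inflow: initial entry requests only (how many users wanted to enter).
  const demand = events.filter((e) => e.kind === "initial-entry").length;
  // Real-time monitoring Inflow: every request that received PASS (actual load).
  const actualLoad = events.filter((e) => e.kind === "pass").length;
  return {
    statisticsInflowTps: demand / windowSeconds,
    realtimeInflowTps: actualLoad / windowSeconds,
  };
}

// e.g. 51 initial entry requests but only 45 PASS responses in the same minute:
const events: EntryEvent[] = [
  ...Array.from({ length: 51 }, (): EntryEvent => ({ kind: "initial-entry" })),
  ...Array.from({ length: 45 }, (): EntryEvent => ({ kind: "pass", via: "direct" })),
];
console.log(summarize(events)); // { statisticsInflowTps: 0.85, realtimeInflowTps: 0.75 }
```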
Relationship to Real-time Monitoring:
- Statistics Inflow corresponds to Entry Requests in real-time monitoring (demand side)
- The actual load side is shown by Inflow in real-time monitoring, which counts requests that received PASS
Statistics shows historical average values rather than current values.
When to use:
- Understanding historical demand patterns
- Comparing demand across different time periods
- Planning capacity based on past demand
Processing Time
Unit: sec (seconds)
What it measures: The average time users spend actively using your service, from when they enter (receive PASS) until they return their key. This represents actual service usage duration.
Why it matters: Processing Time tells you how long users are actually using your service. Longer times might mean:
- Your service is doing more work
- Server is under load (slower processing)
- Users are spending more time on your service
How it's calculated: The system calculates the average of all processing times during a 1-minute period.
Calculation example:
During 1 minute (00:00:00 ~ 00:01:00):
- User A: Entered at 00:00:10, returned key at 00:00:11 → 1 second
- User B: Entered at 00:00:15, returned key at 00:00:17 → 2 seconds
- User C: Entered at 00:00:20, returned key at 00:00:26 → 6 seconds
- User D: Entered at 00:00:30, returned key at 00:00:32 → 2 seconds
- User E: Entered at 00:00:45, returned key at 00:00:46 → 1 second
Average Processing Time = (1 + 2 + 6 + 2 + 1) ÷ 5 = 2.4 seconds
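The same calculation, as a small sketch over hypothetical entry and key-return timestamps:

```typescript
// Hypothetical entry / key-return times (seconds into the minute), matching the example above.
const sessions = [
  { enteredAt: 10, returnedAt: 11 },
  { enteredAt: 15, returnedAt: 17 },
  { enteredAt: 20, returnedAt: 26 },
  { enteredAt: 30, returnedAt: 32 },
  { enteredAt: 45, returnedAt: 46 },
];

// Processing Time = average of (key return time - entry time) over the minute.
const processingTime =
  sessions.reduce((sum, s) => sum + (s.returnedAt - s.enteredAt), 0) / sessions.length;
console.log(processingTime); // 2.4 seconds
```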
Real-world scenarios:
Scenario 1: Fast service (e.g., simple API call)
Processing Time: 0.5 - 1.5 seconds
→ Service responds quickly, users complete quickly
Scenario 2: Moderate service (e.g., page load)
Processing Time: 2 - 5 seconds
→ Normal page loading time
Scenario 3: Slow service (e.g., heavy computation)
Processing Time: 10+ seconds
→ Service may be under load, or doing complex operations
What affects Processing Time:
- Environment characteristics: Server performance, network speed
- Service type: Simple API vs complex page vs heavy computation
- Integration implementation: How you call `nfStart()` and `nfStop()`
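For context, a hypothetical integration sketch showing where the measured window sits relative to these calls. The stubbed `nfStart()`/`nfStop()` signatures below are assumptions for illustration only; consult the NetFUNNEL agent documentation for the actual API.

```typescript
// Stubbed stand-ins so this sketch is self-contained; the real agent provides these calls,
// and the callback-style signatures used here are assumptions for illustration only.
function nfStart(projectKey: string, segmentKey: string, onPass: () => void): void {
  console.log(`nfStart(${projectKey}, ${segmentKey}): key issued, Processing Time starts`);
  onPass();
}
function nfStop(projectKey: string, segmentKey: string): void {
  console.log(`nfStop(${projectKey}, ${segmentKey}): key returned, Processing Time ends`);
}

function doServiceWork(): void {
  // placeholder for the page load / API call being protected
}

function callProtectedService(): void {
  nfStart("project_1", "segment_1", () => {
    doServiceWork();                  // the work whose duration becomes Processing Time
    nfStop("project_1", "segment_1"); // explicit key return (also what Outflow Rate counts)
  });
}

callProtectedService();
```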
When to use:
- Understanding typical service usage duration
- Identifying performance degradation over time
- Comparing processing times across different periods
Wait Time
Unit: sec (seconds)
What it measures: The longest wait time experienced by any user during a 1-minute period. This shows the worst-case user experience - how long the unluckiest user had to wait.
Why it matters: While average wait time tells you typical experience, maximum wait time tells you the worst experience. This helps you understand:
- Peak period user experience
- Whether some users are waiting too long
- If capacity adjustments are needed
How it's calculated: The system finds the single longest wait time among all users who waited during that minute.
Calculation example:
During 1 minute (00:00:00 ~ 00:01:00):
- User A: Waited from 00:00:10 to 00:00:11 → 1 second wait
- User B: Waited from 00:00:15 to 00:00:17 → 2 seconds wait
- User C: Waited from 00:00:20 to 00:00:26 → 6 seconds wait (longest!)
- User D: Waited from 00:00:30 to 00:00:32 → 2 seconds wait
Wait Time = 6 seconds (the maximum)
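The maximum-versus-average distinction, as a small sketch over hypothetical wait durations:

```typescript
// Hypothetical wait durations (seconds) observed during one minute, matching the example above.
const waits = [1, 2, 6, 2];

// Statistics Wait Time is the single longest wait in the period, not the average.
const maxWait = Math.max(...waits);                                      // 6 seconds
const averageWait = waits.reduce((sum, w) => sum + w, 0) / waits.length; // 2.75 seconds
console.log(maxWait, averageWait);
```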
Real-world scenarios:
Scenario 1: Low wait time (good)
Wait Time: 1-3 seconds
→ Users don't wait long, good user experience
Scenario 2: Moderate wait time (acceptable)
Wait Time: 5-10 seconds
→ Some waiting, but acceptable for most users
Scenario 3: High wait time (needs attention)
Wait Time: 30+ seconds
→ Users waiting too long, consider increasing Limited Inflow
Operational insight: High wait times in statistics may indicate periods when Limited Inflow was set too low relative to demand. If you see consistently high wait times, you might want to:
- Increase Limited Inflow (if server capacity allows)
- Review demand patterns to better plan capacity
Relationship to Real-time Monitoring: This is similar to Wait Time in real-time monitoring, but statistics shows historical maximum values rather than current averages.
When to use:
- Understanding worst-case user experience
- Identifying peak periods with long waits
- Planning capacity adjustments
Queue
Unit: pax (passengers/users)
What it measures: The cumulative number of users who entered the waiting room during a 1-minute period. This shows total waiting demand, not the current queue size.
Important distinction:
- Statistics Queue: Cumulative count of all users who waited during the period
- Real-time Queue: Current number of users waiting right now
Why it matters: This metric helps you understand:
- How many users experienced waiting
- Total waiting demand during the period
- Whether waiting is a common experience
How it's calculated: The system counts every user who enters the waiting room during the 1-minute period.
Calculation example:
During 1 minute (00:00:00 ~ 00:01:00):
- 00:00:05: User A enters queue
- 00:00:12: User B enters queue
- 00:00:18: User C enters queue
- 00:00:25: User D enters queue
- 00:00:35: User E enters queue
- 00:00:48: User F enters queue
Queue = 6 users (cumulative count)
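A small sketch contrasting the cumulative statistics count with a point-in-time queue size, using hypothetical join and leave times:

```typescript
// Hypothetical waiting-room join/leave times (seconds into the minute) for users who waited.
const waiters = [
  { joined: 5, left: 20 },
  { joined: 12, left: 40 },
  { joined: 18, left: 30 },
  { joined: 25, left: 55 },
  { joined: 35, left: 58 },
  { joined: 48, left: 59 },
];

// Statistics Queue: cumulative count of everyone who entered the waiting room this minute.
const statisticsQueue = waiters.length; // 6 users

// Real-time Queue Size at t = 30 s: only those still waiting at that instant.
const t = 30;
const realtimeQueueSize = waiters.filter((w) => w.joined <= t && w.left > t).length; // 2 users
console.log(statisticsQueue, realtimeQueueSize);
```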
Real-world scenarios:
Scenario 1: No waiting
Queue: 0 users
→ All users entered immediately, no waiting room needed
Scenario 2: Light waiting
Queue: 10-50 users per minute
→ Some users wait, but not many
Scenario 3: Heavy waiting
Queue: 100+ users per minute
→ Many users experiencing waiting, high demand
Relationship to Real-time Monitoring: This is similar to Queue Size in real-time monitoring, but statistics shows historical cumulative values rather than current values.
When to use:
- Understanding waiting demand patterns
- Comparing waiting across different time periods
- Identifying peak waiting periods
Limited Inflow
Unit: - (dimensionless count, just a number)
What it measures: The Limited Inflow value that was configured in the administrator console during the 1-minute period. This represents the maximum capacity allowed for entry at that time.
Why it matters: Limited Inflow is your "capacity gate" - it controls how many users can be active at once. By comparing this with actual usage (Users metric), you can see:
- If capacity was set appropriately
- If you had unused capacity
- If capacity was too low (causing queues)
Real-world example:
During 1 minute:
- Limited Inflow: 100 users
- Users (actual): 95 users
→ 5 users of capacity unused (5% headroom)
During another minute:
- Limited Inflow: 100 users
- Users (actual): 100 users
→ Capacity fully utilized (0% headroom)
During another minute:
- Limited Inflow: 100 users
- Users (actual): 100 users
- Queue: 50 users
→ Capacity full, users waiting (may need to increase)
How to interpret:
- Limited Inflow > Users: You have unused capacity
- Limited Inflow = Users: Capacity fully utilized
- Limited Inflow < Demand: Users will queue (check Queue metric)
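A minimal sketch that applies these interpretation rules to hypothetical values; the helper name and messages are illustrative, not a NetFUNNEL API.

```typescript
// Illustrative helper applying the interpretation rules above; not a NetFUNNEL API.
function interpretCapacity(limitedInflow: number, users: number, queue: number): string {
  if (users >= limitedInflow && queue > 0) {
    return "Capacity full and users are waiting; consider raising Limited Inflow if the server allows";
  }
  if (users >= limitedInflow) {
    return "Capacity fully utilized (0% headroom)";
  }
  const headroom = ((limitedInflow - users) / limitedInflow) * 100;
  return `Unused capacity: ${headroom.toFixed(0)}% headroom`;
}

console.log(interpretCapacity(100, 95, 0));   // Unused capacity: 5% headroom
console.log(interpretCapacity(100, 100, 0));  // Capacity fully utilized (0% headroom)
console.log(interpretCapacity(100, 100, 50)); // Capacity full and users are waiting; ...
```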
Relationship to Real-time Monitoring: This corresponds to the Limited Inflow setting used in real-time monitoring.
When to use:
- Reviewing historical capacity settings
- Understanding why queues formed (if Limited Inflow was too low)
- Planning future capacity settings based on past patterns
Users
Unit: pax (passengers/users)
What it measures: A snapshot of active users at a specific moment. Active users are those who have received keys but haven't returned them yet - they're currently using your service.
Important: This is a snapshot, not an average!
- Shows the count at a specific moment (e.g., 00:00:01)
- Not a cumulative value (doesn't add up over time)
- Not an average (not the mean of multiple measurements)
Why it matters: This tells you how many users were actively using your service at that moment. It's like taking a photo of a room and counting how many people are in it.
Real-world example:
At 00:00:01 (snapshot moment):
- User A: Has key, using service
- User B: Has key, using service
- User C: Has key, using service
- User D: Has key, using service
- User E: Has key, using service
Users = 5 users (snapshot count)
At 00:01:01 (next snapshot):
- User A: Returned key (no longer active)
- User B: Still has key
- User C: Still has key
- User D: Returned key (no longer active)
- User E: Still has key
- User F: Just received key (newly active)
Users = 4 users (new snapshot)
Comparison with Limited Inflow:
Limited Inflow: 100 users
Users (snapshot): 95 users
→ 5 users of capacity available
Limited Inflow: 100 users
Users (snapshot): 100 users
→ Capacity full, new users will queue
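A small sketch of the snapshot idea, counting hypothetical keys that have been issued but not yet returned at a given instant:

```typescript
// Hypothetical key lifecycle records; returnedAt is undefined while the key is still held.
const keys: Array<{ user: string; issuedAt: number; returnedAt?: number }> = [
  { user: "A", issuedAt: 0, returnedAt: 30 },
  { user: "B", issuedAt: 0 },
  { user: "C", issuedAt: 0 },
  { user: "D", issuedAt: 0, returnedAt: 45 },
  { user: "E", issuedAt: 0 },
  { user: "F", issuedAt: 61 },
];

// Users = snapshot count of keys issued at or before instant `t` and not yet returned.
function usersAt(t: number): number {
  return keys.filter(
    (k) => k.issuedAt <= t && (k.returnedAt === undefined || k.returnedAt > t)
  ).length;
}

console.log(usersAt(1));  // 5 (snapshot at 00:00:01)
console.log(usersAt(61)); // 4 (snapshot at 00:01:01: A and D returned, F newly active)
```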
Relationship to Real-time Monitoring: This is similar to Active Users in real-time monitoring, but statistics shows historical snapshots rather than current values.
When to use:
- Understanding concurrent usage at specific moments
- Comparing actual usage with capacity limits
- Identifying peak usage moments
Outflow Rate (%)
Unit: % (percentage)
What it measures: The percentage of users who entered your service and explicitly returned their keys (completed properly). This shows integration health - are users properly finishing their sessions?
Why it matters: A high Outflow Rate means users are properly completing their sessions. A low rate might indicate:
- Missing `nfStop()` calls in your code
- Integration issues
- Users abandoning sessions
How it's calculated: The system calculates: (Users who explicitly returned keys) ÷ (Users who entered) × 100
Calculation example:
During 1 minute (00:00:00 ~ 00:01:00):
Users who entered: 10 users
- User A: Entered and returned key ✅
- User B: Entered and returned key ✅
- User C: Entered and returned key ✅
- User D: Entered but didn't return key (timeout) ❌
- User E: Entered and returned key ✅
- User F: Entered and returned key ✅
- User G: Entered but didn't return key (timeout) ❌
- User H: Entered and returned key ✅
- User I: Entered and returned key ✅
- User J: Entered and returned key ✅
Users who explicitly returned keys: 8 users
Outflow Rate = (8 ÷ 10) × 100 = 80%
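The same formula as a short sketch, with hypothetical counts:

```typescript
// Hypothetical per-minute counts, matching the example above.
const entered = 10;           // users who entered (received keys)
const explicitlyReturned = 8; // users who explicitly returned their keys before timing out

// Outflow Rate = (explicit key returns / entries) x 100
const outflowRate = (explicitlyReturned / entered) * 100;
console.log(outflowRate); // 80 (%)
```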
Real-world scenarios:
Scenario 1: Excellent integration (good)
Outflow Rate: 90-100%
→ Almost all users properly complete sessions
→ Integration is working well
Scenario 2: Good integration (acceptable)
Outflow Rate: 70-89%
→ Most users complete properly
→ Some may be timing out, but acceptable
Scenario 3: Poor integration (needs attention)
Outflow Rate: <70%
→ Many users not completing properly
→ Likely missing nfStop() calls or integration issues
→ Should investigate and fix
What to do if Outflow Rate is low:
- Immediate: Reduce Timeout values to free up capacity quickly
- Investigation: Check if `nfStop()` is being called properly
- Long-term: Fix integration issues causing missing key returns
Relationship to Real-time Monitoring: This corresponds to Outflow Rate in real-time monitoring, but statistics provides historical averages.
When to use:
- Monitoring integration health over time
- Identifying periods with integration issues
- Comparing completion rates across different periods
Bypass
Unit: TPS (Transactions Per Second)
What it measures: The average rate of requests that received BYPASS responses per second. These are requests that completely bypassed the NetFUNNEL waiting room.
Why it happens: Normally, when users try to enter, NetFUNNEL responds with:
- WAIT: Go to waiting room
- PASS: Enter immediately
But when a segment or project is deactivated, NetFUNNEL sends:
- BYPASS: Bypass the waiting room entirely (as if NetFUNNEL isn't active)
Real-world example:
Scenario: Segment is deactivated
- User A sends Initial Entry Request → Receives BYPASS
- User B sends Initial Entry Request → Receives BYPASS
- User C sends Re-entry Request → Receives BYPASS
During 1 minute: 30 BYPASS responses
Bypass = 30 ÷ 60 = 0.5 TPS
When you'll see Bypass:
- Segment is deactivated
- Project is deactivated
Interpretation:
- Bypass > 0: Some requests bypassed waiting room (segment/project was deactivated)
- Bypass = 0: All requests went through normal NetFUNNEL flow
When to use:
- Understanding when segments were deactivated
- Reviewing maintenance/testing periods
- Confirming traffic control was active
Block
Unit: TPS (Transactions Per Second)
What it measures: The average rate of requests that received BLOCK responses per second. These are requests that were blocked from entering.
Why it happens: Normally, when users try to enter, NetFUNNEL responds with:
- WAIT: Go to waiting room
- PASS: Enter immediately
But requests are blocked (receive BLOCK) when:
- Segment Block mode: Segment is set to Block mode (intentional blocking)
Note: Requests blocked by the Repeated Request Block feature (which return a 302 status code) are NOT counted in this Block metric. Repeated Request Block statistics are tracked separately at the NetFUNNEL server instance level.
Real-world example:
Scenario: Segment set to Block mode
- User A sends Initial Entry Request → Receives BLOCK (segment in Block mode)
- User B sends Initial Entry Request → Receives BLOCK (segment in Block mode)
- User C sends Initial Entry Request → Receives BLOCK (segment in Block mode)
- User D sends Initial Entry Request → Receives BLOCK (segment in Block mode)
During 1 minute: 20 BLOCK responses
Block = 20 ÷ 60 = 0.33 TPS
When you'll see Block:
- Segment intentionally set to Block mode
- Anti-bot protection triggered
- Anti-abuse protection triggered
- Suspicious request patterns detected
Important: This metric does NOT include requests blocked by Repeated Request Block (302 responses). For Repeated Request Block statistics, see the NetFUNNEL server instance level Block metric.
Interpretation:
- Block > 0: Some requests were blocked (security/abuse prevention working)
- Block = 0: No requests were blocked
When to use:
- Understanding security/abuse prevention effectiveness
- Reviewing blocked request patterns
- Confirming blocking features are working
NetFUNNEL Server Instance Level Metrics
These metrics are viewed at the NetFUNNEL server instance level. They are maintenance metrics for the NetFUNNEL server itself.
If you're using NetFUNNEL as a managed service, you typically don't need to monitor these metrics. They're useful for NetFUNNEL server administrators or engineers managing server installations.
CPU Occupancy (%)
Unit: % (percentage)
What it measures: CPU usage of the NetFUNNEL server at the moment of measurement, expressed as a percentage (0-100%).
Why it matters: High CPU usage might indicate:
- Server is under heavy load
- Performance issues
- Need for server scaling
Real-world example:
At 00:00:01: CPU Occupancy = 45%
→ Server using 45% of CPU capacity
→ 55% capacity available
At 00:01:01: CPU Occupancy = 85%
→ Server using 85% of CPU capacity
→ 15% capacity available (getting high)
When to use:
- Monitoring server resource utilization
- Identifying performance bottlenecks
- Planning server capacity
All Requests (NetFUNNEL Level)
Unit: TPS (Transactions Per Second)
What it measures: ⚠️ Critical Distinction: This is completely different from the Project/Segment "All Requests" metric!
This metric includes ALL types of API requests to the NetFUNNEL server, not just traffic control requests. It's a comprehensive view of all server activity.
What's included (ALL API Requests):
- Traffic control API requests (same 4 types as Project/Segment level):
- Initial Entry Request
- Re-entry Request
- Alive Notice Request
- Complete Request
- Data query API requests: Requests for statistics data, monitoring dashboards, reports
- Administrative API requests: Server management, configuration, admin operations
- Any other API requests: All other types of API calls to the NetFUNNEL server
Key Difference:
| Aspect | Project/Segment All Requests | NetFUNNEL Level All Requests |
|---|---|---|
| Scope | Traffic control API calls only | ALL API requests |
| Includes | 4 types of traffic control calls | Traffic control + Data queries + Admin + Others |
| Purpose | Understand traffic control load | Understand total server load |
| Use Case | Traffic control capacity planning | Server capacity planning |
Real-world example:
During 1 minute:
Traffic control requests (same as Project/Segment level):
- 100 Initial Entry Requests
- 50 Re-entry Requests
- 200 Alive Notice Requests
- 100 Complete Requests
Subtotal: 450 requests
Additional requests (NOT in Project/Segment level):
- 50 Statistics query requests (dashboard refreshes)
- 10 Admin API requests (configuration changes)
- 5 Other API requests
Total: 515 requests in 60 seconds
All Requests = 515 ÷ 60 = 8.58 TPS
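A small sketch contrasting the two scopes with hypothetical per-minute counts:

```typescript
// Hypothetical per-minute request counts by category.
const requests = {
  trafficControl: { initialEntry: 100, reEntry: 50, aliveNotice: 200, complete: 100 },
  dataQuery: 50, // statistics / dashboard queries
  admin: 10,     // configuration changes
  other: 5,
};

const trafficControlTotal = Object.values(requests.trafficControl).reduce((sum, n) => sum + n, 0);
const serverTotal = trafficControlTotal + requests.dataQuery + requests.admin + requests.other;

console.log(trafficControlTotal / 60); // 7.5 TPS   -> Project/Segment "All Requests"
console.log(serverTotal / 60);         // ~8.58 TPS -> NetFUNNEL level "All Requests"
```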
Why this matters:
- Project/Segment All Requests: Shows your application's traffic control activity
- NetFUNNEL Level All Requests: Shows total server load including admin operations, dashboard queries, etc.
When to use:
- Understanding total server load (all API types combined)
- Capacity planning for NetFUNNEL server infrastructure
- Identifying unusual server activity (including admin operations)
- Server resource planning and scaling decisions
Session
Unit: - (dimensionless count, just a number)
What it measures: The count of completed sessions on the NetFUNNEL server. A session represents one complete user journey from entry (initial request) to completion (key return) on the server.
Why it matters: This metric helps you understand:
- Total number of user sessions processed by the server
- Server workload in terms of completed user interactions
- Overall server activity level
How it's calculated: The system counts each session that completes (from entry to key return) during the measurement period. In statistics, this value is aggregated based on the selected time period:
- Day view: Sum of sessions per minute
- Month view: Sum of sessions per hour
- Year view: Sum of sessions per day
Real-world example:
During 1 minute (00:00:00 ~ 00:01:00):
- User A: Entered at 00:00:10, completed at 00:00:15 → 1 session
- User B: Entered at 00:00:20, completed at 00:00:25 → 1 session
- User C: Entered at 00:00:30, completed at 00:00:35 → 1 session
Session = 3 sessions (total count)
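A rough sketch of how per-minute session counts might roll up into the hourly and daily sums used by the different views; the data here is synthetic.

```typescript
// Synthetic data: 3 completed sessions in every minute of a day (1,440 minutes).
// Day view: the per-minute values are shown directly.
const sessionsPerMinute: number[] = Array.from({ length: 24 * 60 }, () => 3);

// Month view: per-minute counts rolled up into hourly sums.
const sessionsPerHour = Array.from({ length: 24 }, (_, hour) =>
  sessionsPerMinute.slice(hour * 60, (hour + 1) * 60).reduce((sum, n) => sum + n, 0)
);

// Year view: hourly sums rolled up into a daily total.
const sessionsPerDay = sessionsPerHour.reduce((sum, n) => sum + n, 0);

console.log(sessionsPerHour[0]); // 180 sessions in hour 0 (3 per minute x 60 minutes)
console.log(sessionsPerDay);     // 4320 sessions in the day
```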
When to use:
- Understanding total server activity in terms of completed sessions
- Comparing server workload across different time periods
- Planning server capacity based on session volume
Block (NetFUNNEL Level)
Unit: TPS (Transactions Per Second)
What it measures: ⚠️ Important: This is the NetFUNNEL server instance level Block metric, which is different from the Project/Segment level Block metric.
The average rate of requests blocked per second by the Repeated Request Block feature at the NetFUNNEL server level, such as clients that exceed the configured request limit within a time window or clients that have been permanently blocked.
Why it matters: This metric helps you understand:
- How many requests are being blocked by the repeated request block feature at the server level
- Effectiveness of server-level abuse prevention
- Server load from blocked requests
How it's calculated: The system counts requests blocked by the repeated request block feature during the measurement period and calculates the average rate per second. In statistics, this value is aggregated based on the selected time period:
- Day view: Sum of blocked requests per minute, then averaged to TPS
- Month view: Sum of blocked requests per hour, then averaged to TPS
- Year view: Sum of blocked requests per day, then averaged to TPS
Real-world example:
During 1 minute (00:00:00 ~ 00:01:00):
- Request A: Blocked (repeated request block) → counted
- Request B: Blocked (repeated request block) → counted
- Request C: Allowed (normal) → not counted
- Request D: Blocked (repeated request block) → counted
Total: 3 blocked requests in 60 seconds
Block = 3 ÷ 60 = 0.05 TPS
Difference from Project/Segment Level Block:
- This metric (NetFUNNEL Level): Blocks from the Repeated Request Block feature at the server level
- Project/Segment Level Block: Blocks from segment Block mode (Repeated Request Block responses are not counted there)
When to use:
- Monitoring server-level repeated request block feature effectiveness
- Understanding server abuse prevention activity
- Reviewing blocked request patterns at the server infrastructure level
Metric Reference
For detailed explanations of what each metric represents in the context of traffic flow, refer to Metrics Quick Reference. The concepts are the same, but statistics provides historical aggregated values rather than real-time snapshots.