The new Subway Public Dashboard is a user-friendly tool that tells our customers how we’re doing by:
Major Incidents are incidents that delay 50 or more trains. Such events cause the most disruption to customers. Major incidents are separated into six categories:
Filtering by Peak or Off-Peak shows incidents that began during the selected time period (regardless of how long the incident lasted).
Service Delivered (sometimes referred to as throughput) measures our ability to deliver the scheduled service. Service Delivered is measured along the busiest part of the line, which reflects service across the entire line, and is reported as the percentage of scheduled trains that are provided during peak hours.
Peak hours are as noted above.
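As a rough illustration, Service Delivered reduces to a simple ratio once the scheduled and actually operated peak trains at a line's busiest point have been counted. The function and figures below are illustrative, not the MTA's actual implementation.

```python
# Sketch of the Service Delivered calculation, assuming we already have counts
# of scheduled and actually operated trains at a line's busiest point during
# peak hours. Variable names and numbers are hypothetical.

def service_delivered(trains_operated: int, trains_scheduled: int) -> float:
    """Percentage of scheduled peak trains actually provided."""
    if trains_scheduled == 0:
        raise ValueError("no scheduled trains")
    return 100.0 * trains_operated / trains_scheduled

# e.g. 92 of 96 scheduled peak trains observed at the busiest timepoint:
print(round(service_delivered(92, 96), 1))  # 95.8
```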
Additional Platform Time (APT) is the estimated average extra time that customers spend waiting on the platform for a train, compared with their scheduled wait time. Similarly, Additional Train Time (ATT) is the estimated average extra time that customers spend onboard a train, compared with the time they would have spent onboard if trains were running according to schedule. Journey Time is the average time it takes a customer to complete a trip, from the time they start waiting at the origin platform to the time they alight from a train at their destination. Additional Journey Time (AJT) is the average extra time that a customer’s trip takes, compared with their scheduled trip time; it equals the sum of APT and ATT, and equivalently Journey Time minus the average scheduled trip time. Please note that the Journey Time and Additional Journey Time metrics do not appear in the charts on the dashboard; they appear only in the state-mandated metrics Excel file. All estimates are for each individual train a customer uses in their journey, not all trains in their journey combined (in transit terminology, each “unlinked trip”). Put another way, each “leg” of a subway journey involving transfers is counted separately in calculating these metrics. This allows these metrics to be calculated for each individual line, as well as for the whole system. These metrics are reported for trips starting from 6 a.m. to 11 p.m. on weekdays.
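The relationships among these four metrics can be shown with a toy calculation for a single unlinked trip. All times are in minutes, and the helper and its inputs are hypothetical, not part of the MTA's actual pipeline.

```python
# Toy illustration of the journey-time metrics for one unlinked trip ("leg").
# Inputs: actual and scheduled platform wait, actual and scheduled onboard time.

def journey_metrics(actual_wait, sched_wait, actual_onboard, sched_onboard):
    apt = actual_wait - sched_wait          # Additional Platform Time
    att = actual_onboard - sched_onboard    # Additional Train Time
    journey_time = actual_wait + actual_onboard
    scheduled_trip = sched_wait + sched_onboard
    ajt = apt + att                         # Additional Journey Time
    # AJT is also Journey Time minus the scheduled trip time:
    assert abs(ajt - (journey_time - scheduled_trip)) < 1e-9
    return apt, att, journey_time, ajt

apt, att, jt, ajt = journey_metrics(4.0, 2.5, 13.0, 11.0)
print(apt, att, jt, ajt)  # 1.5 2.0 17.0 3.5
```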
APT, ATT, AJT, and Journey Time are calculated using a process which takes as input ridership data from MetroCard swipes, OMNY taps, train schedules, and actual train movement data, and then estimates the actual and scheduled journey time for every weekday journey “leg”, or unlinked trip, on the subway system. This process has four main parts:
This process relies on several key assumptions, to overcome data availability and computational feasibility issues:
These simplifying assumptions are, of course, not strictly correct: a person’s next swipe doesn’t always predict their destination, subway ridership does vary somewhat from day to day, and people do sometimes vary their path for the same journey. However, they serve as valuable approximations of weekday passenger behavior that keep the process’s estimates of these metrics close to their “actual” values at an aggregate level. These assumptions do not hold as well during late nights and weekends, largely due to frequent service changes caused by track work, which is the main reason these metrics are reported only for weekdays and for trips starting between 6 a.m. and 11 p.m. In the era of the COVID-19 pandemic, several of these assumptions are clearly less valid than when the metrics were launched. This, along with other data issues, prompted some days to be excluded from reporting, as well as changes to the sample trip generation process to use more up-to-date sample days.
At the same time, the rollout of OMNY has improved data quality. While MetroCard data only indicates the 6-minute window in which a swipe took place, OMNY provides a precise timestamp. This requires fewer assumptions, as there is no need to distribute swipes across the 6-minute periods used by MetroCards.
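The precision difference can be sketched as follows. One simple way to handle MetroCard's coarse timestamps, assumed here for illustration and not necessarily the MTA's actual approach, is to place each swipe uniformly at random within its 6-minute window; an OMNY tap needs no such treatment.

```python
# Hedged sketch: a MetroCard record identifies only a 6-minute window, so this
# illustrative approach draws a uniform point within that window, while an
# OMNY tap already carries an exact timestamp.

import random
from datetime import datetime, timedelta

def metrocard_entry_time(window_start: datetime) -> datetime:
    # Swipe known only to the 6-minute window; pick a uniform point within it.
    return window_start + timedelta(seconds=random.uniform(0, 360))

def omny_entry_time(tap_timestamp: datetime) -> datetime:
    # OMNY taps are already precise; no assumption needed.
    return tap_timestamp

window = datetime(2023, 3, 1, 8, 30)
t = metrocard_entry_time(window)
assert window <= t < window + timedelta(minutes=6)
```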
As data availability changes (e.g., rollout of new data sources for train locations, continued rollout of OMNY), updates will be made to the way these metrics are generated. Similarly, the methodology may be refined. If warranted, this may prompt revision of historical data to provide a comparable time series. Changes will also likely impact related metrics such as Customer Journey Time Performance, Journey Time, and Additional Journey Time.
Customer Journey Time Performance (CJTP) estimates the percentage of customer trips with a total travel time within 5 minutes of the scheduled time. It is equivalent to the percentage of customer trips with Additional Platform Time (APT) + Additional Train Time (ATT) less than 5 minutes. It is calculated using the same process as is used to calculate APT and ATT. See the description of those metrics for how this process works, as well as relevant assumptions. Like APT and ATT, CJTP is estimated for each individual train a customer uses in their journey, not all trains in their journey combined; in transit terminology, the “unlinked trip” level. It is reported for weekdays for trips starting from 6 a.m. to 11 p.m.
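Given per-trip APT and ATT values, CJTP is a simple share. The sketch below uses invented trip data to show the threshold test.

```python
# Minimal sketch of Customer Journey Time Performance: the share of unlinked
# trips whose APT + ATT is under 5 minutes. Trip data here is made up.

def cjtp(trips):
    """trips: iterable of (apt_minutes, att_minutes) per unlinked trip."""
    trips = list(trips)
    on_time = sum(1 for apt, att in trips if apt + att < 5.0)
    return 100.0 * on_time / len(trips)

sample = [(1.0, 2.0), (0.5, 1.0), (4.0, 3.5), (2.0, 2.9)]
print(cjtp(sample))  # 75.0
```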
Mean Distance Between Failures (MDBF) measures the average number of miles a subway car travels in service before a car-related problem, such as a door failure, loss of motor power, or a brake issue, causes a delay of over five minutes. It is calculated by dividing the number of miles train cars run in service by the number of incidents due to car-related problems.
MDBF numbers include weekdays and weekends. Due to month-to-month statistical variation, this number is reported as a rolling 12-month average.
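The rolling figure can be sketched as total car-miles over the window divided by total qualifying incidents. Monthly numbers below are invented for illustration.

```python
# Sketch of MDBF as a rolling 12-month figure: total car-miles in the window
# divided by total car-related delay incidents. Inputs are hypothetical.

def rolling_mdbf(monthly_miles, monthly_failures, window=12):
    """Return MDBF over the most recent `window` months."""
    miles = sum(monthly_miles[-window:])
    failures = sum(monthly_failures[-window:])
    return miles / failures if failures else float("inf")

miles = [30_000_000] * 12   # car-miles per month (illustrative)
failures = [200] * 12       # qualifying incidents per month (illustrative)
print(rolling_mdbf(miles, failures))  # 150000.0
```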
| Car Class | Lines |
| --- | --- |
| R62 | |
| R62A | |
| R142 | |
| R142A | |
| R188 | |
| R32 | |
| R42 | |
| R46 | |
| R68 | |
| R68A | |
| R143 | |
| R160 | |
| R179 | |
Subway Car Passenger Environment Survey Key Performance Indicators (PES-KPIs) group the results of surveys of subway car condition into three categories:
Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.
Station Passenger Environment Survey Performance Indicators (PES-KPI) group the results of surveys of station condition into three categories:
Due to statistical variations in the monthly surveys, this number is reported as a rolling 12-month average.
Elevator and escalator availability is defined as the share of time that elevators or escalators are operational systemwide. Availability for a given time period is the percentage of that period a unit is available for customer use. All service outages, regardless of cause, count as downtime in availability calculations, except for units out of service for capital rehabilitation, which are excluded.
Most elevators and escalators in the subway are maintained by NYCT and are electronically monitored 24 hours a day. Some elevators and escalators in the subway are owned and maintained by third parties; these are inspected by NYCT personnel every 8 hours. The dashboard numbers include weekdays and weekends.
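The availability calculation can be sketched as uptime over in-scope time, with capital-rehabilitation hours removed from the denominator. The function and figures are illustrative, not the actual reporting code.

```python
# Hedged sketch of elevator/escalator availability: percent of the reporting
# period a unit is in service, with all outages counted as downtime except
# capital-rehabilitation closures, which are excluded entirely.

def availability(period_hours, outage_hours, capital_rehab_hours=0.0):
    in_scope = period_hours - capital_rehab_hours
    if in_scope <= 0:
        raise ValueError("unit out for capital rehab the entire period")
    up = in_scope - outage_hours
    return 100.0 * up / in_scope

# A unit down 36 hours in a 720-hour month, none of it capital rehab:
print(round(availability(720, 36), 1))  # 95.0
```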
Wait Assessment (WA) measures how regularly trains are spaced during peak hours at selected timepoints on each line. To meet the standard, the headway (the time between trains) can be no more than 25% greater than the scheduled headway. This yields a percentage of trains passing the standard, but it does not account for extra service operated, is not weighted by how many customers are waiting at different stations, does not distinguish between relatively minor gaps in service and major delays, and is not a true measurement of the time customers spend waiting on the platform.
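The pass/fail test for a single observed headway is a one-line comparison, sketched below with illustrative inputs.

```python
# Sketch of the Wait Assessment test for one observed headway: the gap between
# trains may not exceed the scheduled headway by more than 25%.

def meets_wait_assessment(actual_headway_min, scheduled_headway_min):
    return actual_headway_min <= 1.25 * scheduled_headway_min

print(meets_wait_assessment(6.25, 5.0))  # True  (exactly 25% over passes)
print(meets_wait_assessment(6.5, 5.0))   # False
```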
Terminal On-Time Performance (TOTP) measures the percentage of trains arriving at their destination terminals as scheduled. A train is counted as on-time if it arrives at its destination early, on time, or no more than five minutes late, and has not skipped any planned stops. TOTP is a legacy metric that provides a measure of trains arriving within the standard, and not a direct measure of customer travel time, particularly since relatively few customers travel all the way to the end of a line.
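The on-time test for a single train combines the five-minute threshold with the no-skipped-stops condition, as sketched below with illustrative inputs.

```python
# Sketch of the Terminal On-Time Performance test for one train: on time if it
# reaches its terminal no more than five minutes late and skipped no planned
# stops. Early or on-time arrivals have minutes_late <= 0.

def is_on_time(minutes_late: float, skipped_stops: int) -> bool:
    return minutes_late <= 5.0 and skipped_stops == 0

print(is_on_time(4.5, 0))  # True
print(is_on_time(6.0, 0))  # False
print(is_on_time(2.0, 1))  # False (skipped a planned stop)
```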