
BS EN 14534:2016 (2017 Edition)

$215.11

Postal services. Quality of service. Measurement of the transit time of end-to-end services for bulk mail

Published By: BSI
Publication Date: 2017
Number of Pages: 130


This European Standard specifies methods for measuring the end-to-end transit time of domestic and cross-border bulk mail collected, processed and delivered by postal service operators. It considers methods using representative end-to-end samples for all types of bulk-mail services with defined transit-time service levels as offered to the postal customer, and it specifies a set of minimum requirements for the design of a quality-of-service measurement system for bulk mail, involving the selection and distribution of test mail sent by business senders and received by selected panellists.

This European Standard is applicable to the measurement of end-to-end priority and non-priority bulk-mail services. For the purposes of this standard, bulk-mail services can include all types of addressed bulk mail, including but not limited to letter mail, direct mail, magazines, newspapers and encombrant-format mailings. It relates to the measurement of bulk-mail services offered to businesses that have pick-ups at their offices or hand their mail to postal service operators. If a third-party agent acts for the postal operator, the time the mail is handed over to the agent forms part of the measurement; where a third-party agent acts for the sending customer, the measurement starts from the point at which the mail is handed over to the postal operator.

This European Standard has a modular structure. It is designed to assess the service performance of postal operators for bulk-mail services at the level of a single bulk mailing as defined by the postal customer, or any aggregation thereof, including the performance of an individual customer/operator, the performance of a group of customers/operators, or the performance at national level. The standardized quality-of-service measurement method provides a uniform way of measuring the end-to-end transit time of postal items.

Using a standardized measurement method ensures that the measurement is carried out in an objective and equal way for all operators, in accordance with the requirements of Directive 97/67/EC and its amendments. The end-to-end service measured may be provided by one operator or by a group of operators working either together in the same distribution chain or in parallel in different distribution chains. The end-to-end measurement method specified in this European Standard is not designed to provide results for parts of the distribution chain, and this standard does not include service performance indicators other than those related to end-to-end transit time. In particular, it does not measure whether the timings of collections meet customers' requirements.

The transit-time quality-of-service result is expressed as the percentage of mail delivered by, on or between expected dates. These dates can be defined in absolute terms as calendar days or relative to the date of induction. The transit-time calculation rule works in whole days. This quality-of-service indicator does not measure a postal operator's overall performance in a way that allows direct comparison of postal service operators; this European Standard nevertheless provides minimum requirements for the comparability of end-to-end transit-time measurement results of specific bulk mailings.

This European Standard is not applicable to the measurement of end-to-end transit times of single-piece mail services and hybrid mail, which require different measurement systems and methodologies (see, for example, EN 13850, Postal services – Quality of service – Measurement of the transit time of end-to-end services for single piece priority mail and first class mail). (…)
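The scope above describes the indicator as a whole-day transit-time calculation reported as the percentage of mail delivered by an expected date. A minimal Python sketch of that idea follows; the function names and the simplified "calendar days from induction to delivery" rule are my own illustration, not the standard's normative calculation rule, which additionally accounts for non-delivery days, latest collection times and the precise determination of the date of induction.

```python
from datetime import date

def transit_time_days(induction: date, delivery: date) -> int:
    """Simplified whole-day transit time: calendar days from the date of
    induction to the date of delivery (illustrative only; the normative
    rule also handles non-working days and collection cut-off times)."""
    return (delivery - induction).days

def on_time_share(items: list[tuple[date, date]], target_days: int) -> float:
    """Share of items delivered within the target number of days,
    expressed as a percentage (a J+n style on-time indicator)."""
    on_time = sum(
        1 for induction, delivery in items
        if transit_time_days(induction, delivery) <= target_days
    )
    return 100.0 * on_time / len(items)

# Hypothetical sample of (induction date, delivery date) pairs
items = [
    (date(2016, 5, 2), date(2016, 5, 3)),  # J+1
    (date(2016, 5, 2), date(2016, 5, 4)),  # J+2
    (date(2016, 5, 2), date(2016, 5, 3)),  # J+1
    (date(2016, 5, 2), date(2016, 5, 5)),  # J+3
]
print(on_time_share(items, 2))  # 75.0 (3 of 4 items within J+2)
```

In the standard itself, this percentage would be computed over a weighted, stratified sample of test items rather than a raw list, which is what the weighting and accuracy annexes (Annexes A and K) address.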

PDF Catalog

PDF Pages PDF Title
7 European foreword
8 Introduction
10 1 Scope
11 2 Normative references
3 Terms and definitions
20 4 Symbols and abbreviations
21 5 Transit time as a Quality-of-Service indicator
5.1 General
5.2 Transit time calculation
5.2.1 Measurement unit
5.2.2 Establishing the date of induction
23 Table 1 — One and Multi-Day Inductions
5.2.3 Calculation of the transit time
24 6 Methodology
6.1 Representative sample design
6.2 Minimum Sample Size (MSS)
6.3 The design basis
6.3.1 General
25 6.3.2 Choice of the design basis
6.3.3 Evaluation of the design basis
6.3.3.1 Real mail evaluations
6.3.3.2 Logistic / management data
6.4 Discriminant Mail Characteristics (DMC)
6.4.1 General
26 6.4.2 DMC in aggregated fields of study
27 6.4.3 Geographical stratification
6.5 Geographical distribution of the receiver panel
28 Table 2 — Minimal number of postal areas to be covered
6.6 Creation of test mail
6.6.1 General
29 6.6.2 Logistic structure of a bulk mailing
6.6.3 Separate production and manual inclusion methods
6.6.3.1 General
6.6.3.2 Test item production by the performance monitoring organization
30 6.6.3.3 Test item production by the sender
6.6.3.4 Test item inclusion
6.6.4 Address seeding methods
31 6.7 Documentation of date and time of posting
6.8 Integrity of the measurement
33 7 Report
7.1 Measurement results
7.2 Service Performance Indicators
7.2.1 Available types of indicators
7.2.2 Accuracy
34 7.3 Weighting of the results
7.3.1 Reasons for implementing a weighting system
7.3.1.1 Weighting according to the sample design
7.3.1.2 Weighting due to non-response and invalid test items
7.3.2 Weighting caps
7.3.2.1 General
35 7.3.2.2 Weighting caps for each discriminant characteristic
7.3.2.3 Weighting caps for each individual item
7.4 Content
36 8 Quality control
37 9 The Annexes
38 Annex A (normative) Accuracy calculation
A.1 Scope
A.1.1 General
A.1.2 Two stage sampling approach
A.1.3 Covariance / Stratification / Accuracy calculation
39 A.1.4 The design factor
A.1.5 Single mailing versus continuous measurement
A.2 Symbols
40 A.3 Variance calculation for one stratum
A.3.1 General calculation method – Single Mailing and Induction Point Field of Study
A.3.2 General calculation method – Aggregated Mailing / Induction Point Field of Study
A.3.2.1 The Calculation method
41 A.3.2.2 Relation-to-total variation
42 A.3.2.3 Intra-relation variation
A.4 Variance calculation for a stratified sample
A.4.1 Variance of a weighted sample design
43 A.4.2 Final weight of the individual item
A.4.3 Weighting basis
44 A.4.4 Combination of weighting and covariance
A.5 Calculation of the confidence interval
A.5.1 General
45 A.5.2 Normal approximation
A.5.2.1 The Normal confidence interval
A.5.2.2 Applicability of the Normal confidence interval
46 Table A.1 — Minimum number of non-performance items for the use of the Normal distribution
A.5.3 Agresti-Coull approximation
47 A.5.4 Inverse Beta approximation
48 Annex B (normative) Transit Time Calculation
B.1 Basic Principles
B.2 The date of induction
B.2.1 Determination
49 B.2.2 Examples
B.3 The transit-time calculation rules
B.3.1 Determination
50 B.3.2 Examples
B.3.2.1 EXAMPLE: Five day working week with extra collection on Saturdays
Table B.1 — Collection Monday-Saturday / Delivery Monday-Friday
B.3.2.2 EXAMPLE: Fixed date of induction
51 Table B.2 — Collection in Week 1, Induction on Monday, delivery in Week 2 Thursday to Friday
52 Annex C (normative) Comparability of Measurement Results
C.1 General
C.1.1 Comparing dimensions
53 C.1.2 Preconditions for comparison
C.1.3 Suggestions for comparison methods
54 C.2 Same service provider – Different measurement periods
C.2.1 Scope
C.2.2 Minimum Requirements
55 C.3 Different service providers – Same measurement period
C.3.1 Scope
C.3.2 Minimum Requirements
56 C.4 Cases of limited comparability
58 Annex D (normative) Design of aggregated Fields of Study
D.1 General
D.2 Possible types of aggregation
D.2.1 Multi operator bulk mailing
D.2.2 Bulk mail campaign
D.2.3 Bulk mail customer
59 D.2.4 Bulk mail service provider
D.2.5 Bulk mail service
D.2.6 Group of customers
D.2.7 Group of providers
60 D.2.8 Induction regions
D.2.9 Universal service on national level
D.3 Design requirements
D.3.1 General
D.3.2 Minimum sample size
D.3.3 Design basis
61 D.3.4 Discriminant mail characteristics
D.4 Reporting
62 Annex E (normative) Additional Requirements for continuous Fields of Study [CMS/SCMS]
E.1 Scope
E.2 Methodology
E.2.1 Measurement period
63 E.2.2 Minimum Sample Size (MSS)
E.2.2.1 Domestic measurement systems – priority mail
Table E.1 — Minimum Sample Sizes for selected performance levels (Domestic)
E.2.2.2 Domestic measurement systems – non-priority mail
Table E.2 — Minimum Sample Sizes for selected performance levels (Domestic)
64 E.2.2.3 Cross-border measurement systems
Table E.3 — Minimum Sample Sizes for selected performance levels (Cross-Border)
E.2.3 The Design Basis
65 E.2.4 Discriminant Mail Characteristics
E.2.4.1 Determination of the discriminant mail characteristics
E.2.4.2 Geographical Stratification
E.2.5 Geographical distribution of the receiver panel
Table E.4 — Minimum number of postal areas to be covered in panels up to 90 panellists
66 E.2.6 Distribution of the business sender panel
Table E.5 — Minimum percentage of study domains to be covered
67 E.3 Report
E.3.1 Panel turnover in relation to accuracy
E.3.2 Weighting
E.3.3 Content and timing
68 E.4 Quality Control
E.4.1 General
E.4.2 Statistical design
E.4.3 Address seeding
E.4.4 Test mail production
E.4.5 Sending test items
69 E.4.6 Receiving test items
E.4.7 Data collection
E.4.8 Data analysis and reporting
E.5 Audit
70 Annex F (normative) Quality control
F.1 Statistical design
F.2 Address seeding
F.2.1 Test item production
F.2.2 Provision of receiver address-information to the bulk-mail customer
F.3 Test mail production
F.3.1 Test item production
71 F.3.2 Provision of test items to the bulk-mail customer
F.4 Sending test items
F.5 Receiving test items
F.6 Data collection
F.7 Data analysis and reporting
72 F.8 Archiving
F.9 Quality control and Information Technology (IT)
73 Annex G (normative) Auditing
G.1 General
G.2 Audit of the design basis
G.2.1 General
G.2.2 Methodological audit
74 G.2.3 Results
G.3 Audit of the Quality-of-Service measurement system
G.3.1 Independence
G.3.2 Panel audit
G.3.3 Stability of the parameters
G.3.4 Instructions given to the panellists
G.3.5 General Audit of the system
75 Annex H (informative) Purpose of postal Quality of Service standards
H.1 General
H.2 Benefits of QoS standards
76 H.3 Application by potential users of EN 14534
H.3.1 Postal Operators
77 H.3.2 National Regulators
78 H.3.3 Bulk mail customers
H.4 Detailed analysis
H.5 Other / broader concepts
H.5.1 General
H.5.2 Technical registrations
80 Annex I (informative) Considerations before implementing EN 14534
I.1 Limitations of EN 14534
I.2 Design of the measurement system
I.2.1 Design parameters
82 I.2.2 Field of study
I.2.2.1 Domestic services (single induction)
I.2.2.2 Domestic services (aggregated)
I.2.2.3 Cross border services
I.2.3 Coverage of existing bulk mail customers
83 I.2.4 Geographical coverage of the receiver panel
84 I.3 Measurement organization
I.3.1 Role of the contractor
I.3.2 Independence
85 I.3.3 Tender process
86 Annex J (informative) Design basis
J.1 Discriminant characteristics
J.1.1 Representativeness in a postal end-to-end network
87 J.1.2 Formats and weights
J.1.3 Additional mail characteristics
J.2 Studies for the evaluation of possible candidates
J.2.1 Type and extent of the evaluation
88 J.2.2 A quick-check of significance
Table J.1 — Quick-Check for Significance
89 J.3 Connection between Design Basis and Sample Design
90 J.4 Design basis
J.4.1 Real mail studies for domestic mail
J.4.1.1 General
91 Table J.2 — Possible real mail studies for exemplary Discriminant Mail Characteristics
J.4.1.2 Documentation
92 J.4.1.3 Adequate representativeness
J.4.2 Real mail studies for cross border mail
J.4.3 Alternative design bases
J.4.3.1 General
93 J.4.3.2 Alternative design bases: Proxies for existing real mail flows
J.4.3.3 Requirements for the reporting
J.5 Frequency of update [CMS/SCMS]
94 Annex K (informative) Implementing EN 14534
K.1 Stages of the survey
K.1.1 Set-up and pilot
K.1.1.1 Preparation
K.1.1.2 Set-up
K.1.1.3 Pilot (testing phase)
K.1.1.4 Faster implementation
95 K.1.2 Measurement period
K.1.2.1 Basic case
K.1.2.2 Continuous measurement systems
K.2 Representativeness
K.2.1 Business Senders
96 K.2.2 Receiver Panellists
K.3 Risk of panellist identification
K.4 Induction and delivery
K.4.1 Induction and last collection
97 K.4.2 Delivery and correct addressing
98 K.4.3 P.O. boxes and pick-up times
K.5 Panel turnover
99 K.6 Validation and transit time calculation
K.6.1 Data validation
K.6.1.1 General
K.6.1.2 Item-based validation
100 K.6.1.3 Panellist based validation
101 K.6.2 Service standard
K.6.3 Transit-time calculation rule
102 K.6.4 Loss
103 K.7 Weighting
K.7.1 Weighting and stratification
K.7.1.1 General
K.7.1.2 Real mail distribution and Real Mail Weights (RMW)
104 K.7.1.3 Weighting Basis (WB) and Calculated Mode Weights (CMW)
K.7.1.4 Individual Final Weight (IFW)
105 K.7.1.5 Alternate formulation: Corrective factors
106 K.7.2 Illustrative example
Table K.1 — RMW corresponding to the modes of the geographical characteristic
Table K.2 — RMW corresponding to the modes of the discriminant characteristic MC
Table K.3 — Number of valid items per stratum
Table K.4 — Standard Weighting Basis
Table K.5 — Alternative Weighting Basis
107 Table K.6 — Individual Final Weights (IFW) for the standard weighting basis in each stratum
Table K.7 — Individual Final Weights (IFW) for the alternative weighting basis in each stratum
Table K.8 — Sampling proportions per stratum
Table K.9 — Corrective factors at the stratum level for the standard weighting basis
108 Table K.10 — Corrective factors at the stratum level for the alternative weighting basis
K.7.3 Weighting caps
K.7.3.1 Necessity for weighting caps
Table K.11 — Example of sample with extreme deviation from the real-mail distribution
Table K.12 — Corrective factors at the stratum level for the SWB in a case of major deviation
K.7.3.2 Caps applied at the mode level
109 Table K.13 — Lower and upper bounds for the marginal sampling proportion of the modes of the geographical strata
Table K.14 — Lower and upper bounds for the marginal proportion of the modes of the MC DMC
110 K.7.3.3 Caps at the item level
K.8 Reporting of results
K.8.1 Reporting
111 K.8.2 Archiving
K.9 Audit [SCMS]
K.9.1 General
112 K.9.2 The auditor
K.9.2.1 Position of the auditor
K.9.2.2 Selection of the auditor
K.9.3 Audit report
113 K.9.4 Frequency of audit
114 Annex L (informative) Application of the accuracy calculation
L.1 Limitations of the accuracy calculation methods provided
L.2 Recommendations for the application of the rules
L.2.1 Accuracy
L.2.2 Unstratified end-to-end sample
115 L.2.3 Stratified simple random sample
Table L.1 — Example of a stratified sample
116 L.2.4 Approximation of the Binomial distribution
L.3 The sample size
117 L.4 General Example for a national yearly result
L.4.1 The example
Table L.2 — Mail-flow matrix from panellist S1-S4 to panellist R1+R2
118 Table L.3 — Mail-flow matrix from panellist S1-S4 to panellist R3+R4
L.4.2 Design factor for an unstratified end-to-end sample
119 Table L.4 — Input parameters for the variance calculation
120 L.4.3 Design factor for a stratified random sample
Table L.5 — Standard weighting basis
Table L.6 — Simplified weighting basis
Table L.7 — Corrective factors
121 Table L.8 — Variance of the stratified sample * 802
L.4.4 Accuracy calculation
L.4.4.1 General
L.4.4.2 Normal confidence interval
122 L.4.4.3 Alternative confidence intervals
123 Table L.9 — Comparison of confidence intervals
L.5 Simplified scenarios
L.5.1 Transit time results up to 96 %
L.5.2 Fully proportional sample
L.5.3 Induction / delivery point with only one letter
124 Annex M (informative) Changes to the 2003 version of EN 14534
M.1 Reasons for the review
M.2 Increased applicability
M.2.1 New focus
M.2.1.1 The induction based measurement
M.2.1.2 Aggregated and continuous measurement systems
125 M.2.2 New concepts for the date of induction
M.2.3 New quality performance indicators
M.3 Updated methodology
M.3.1 Added assistance in the determination of the date of induction
M.3.2 New methodological insights from EN 13850:2012
M.3.2.1 General
M.3.2.2 Accuracy and Minimum Sample Size (MSS)
126 M.3.2.3 Transit-time calculation rule
M.3.2.4 Improved applicability of the accuracy calculation method
M.3.2.5 Reduced bias in the accuracy calculation
127 Bibliography