

IN THE UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF DELAWARE



UNITED STATES OF AMERICA,    

                  Plaintiff,

                  v.

DENTSPLY INTERNATIONAL, INC.,   

                  Defendant.


Civil Action No. 99-005 (SLR)

Stamp: FILED
Jun 14 2:47 PM '00
CLERK
U.S. DISTRICT COURT
DISTRICT OF DELAWARE

EXPERT REPORT OF JERRY WIND, PRESIDENT, WIND ASSOCIATES

Dated: June 14, 2000

COUNSEL FOR PLAINTIFF
UNITED STATES OF AMERICA

CARL SCHNEE
UNITED STATES ATTORNEY

Judith M. Kinney (DSB #3643)
Assistant United States Attorney
1201 Market Street, Suite 1100
Wilmington, DE 19801
(302) 573-6277

Mark J. Botti
William E. Berlin
Jon B. Jacobs
Sanford M. Adler
Frederick S. Young
Dionne C. Lomas
Eliza T. Platts-Mills
Adam D. Hirsh
United States Department of Justice
Antitrust Division
325 Seventh Street, N.W., Suite 400
Washington, D.C. 20530
(202) 307-0827


1041 Waverly Road
Gladwyne, PA 19035

Tel: 610 642-2120
Fax: 610 642-2168

DENTAL LAB TECHNICIANS' PREFERENCES AND TRADEOFFS
AMONG COMPETITIVE MANUFACTURERS OF ARTIFICIAL TEETH

FEBRUARY 29, 1999


TABLE OF CONTENTS

  1. Qualifications, Background and Objectives
  2. Research Approach
    1. Research Design
    2. Universe and Sample
    3. The Questionnaire
    4. The Stimuli
    5. Data Collection
    6. Data Analysis
  3. Results
  4. Conclusions

Appendices

  A. The PRIDEM Model
  B. Master Experimental Design [140 cards, in 20 blocks of 7 each]
  C. Illustrative 7 Stimuli Cards [out of a total of 140] plus a base card
  D. The Screener Questionnaire
  E. The Main Questionnaire
  F. Field Instructions
  G. The Screening Results
  H. Curriculum Vitae

I. QUALIFICATIONS, BACKGROUND AND OBJECTIVES

My qualifications, including a list of my publications and past testimony at trial and by deposition, are contained in my curriculum vitae, which is attached as Appendix H to this report. I am compensated at the rate of $750 per hour for my work in this case.

As I understand the issues in this case, Dentsply International is the leading producer of artificial teeth in the U.S. market. Estimates of its current share range from 65 to 80 percent of the U.S. market. Currently, Dentsply distributes its teeth through dealers, and if a dealer adds a competitive line of prefabricated artificial teeth, Dentsply severs its relationship with that dealer.

Other producers of artificial teeth, such as Vita and Ivoclar, have had a difficult time in penetrating the U.S. market. What is not known about this market is the role that the following marketing variables play in influencing dental laboratory technicians' choices of artificial teeth brands:

  • Type of distribution - dealers versus direct distribution by manufacturer
  • Price

The objective of this study, which I have been asked by the United States to conduct, is to produce an appropriate representative sample and to generate data that will provide a basis for establishing empirically the relative importance of these factors and a basis for estimating the expected share of Dentsply and its competitors under various scenarios of the above two marketing variables.

In addition to the data generated by the survey I conducted, which is incorporated into this report, I considered other documents including various documents generally describing the artificial tooth market, marketing and promotional materials for several brands of prefabricated artificial teeth, and the Complaint filed by the United States in this case.

II. RESEARCH APPROACH

A. Research Design

I designed and directed a study among dental laboratory technicians as a tradeoff conjoint study focusing on key variables associated with the product, distribution and price offerings of seven suppliers: Dentsply, Vita, Ivoclar, Myerson, Universal, Kenson and Justi.

The study is designed as a conjoint analysis study that generates the data providing a basis for various types of modeling including the PRIDEM and PRIDEL Models. The paper documenting these models is included in Appendix A attached to this report.

The study was designed as a TMT (Telephone recruitment, Mail follow-up and main Telephone interview) study.

B. Universe and Sample

Universe

The universe was defined as dental lab technicians responsible for the selection of plastic artificial teeth purchased by the lab for use in making dentures.

Sample

A representative national probability sample was selected from the universe of dental lab technicians responsible for the selection of plastic artificial teeth purchased by the lab for use in making dentures. The sample size included 274 respondents, one per laboratory.

The sampling procedure involved 3 phases:

  1. A national sample list of 10,000 dental laboratories was obtained from Survey Sampling, Inc., located in Fairfield, CT. Their business and professional lists are compiled from continuously updated yellow page listings, nationwide. The list of 10,000 laboratories was divided into 10 replicates of 1,000.
  2. Guideline Research, a New York-based national marketing research firm with whom I have worked on similar studies, received the 10 replicates and opened one replicate at a time, with the instruction to start at the beginning of the first replicate and work through all the names in that replicate before beginning the second replicate. A total of 2,520 respondents were called to generate the final sample of respondents. A minimum of 3 dialing attempts was made for each number before eliminating that number from the sample.
  3. The respondents were screened to meet the following requirements:
          Laboratory must fabricate dentures using plastic artificial teeth
          Respondent must be responsible for selecting the plastic artificial teeth that the laboratory uses
          Respondent had to be willing to participate (accept receipt of the packet)

The detailed screening results are included in Appendix G.

C. The Questionnaire

The main part of the questionnaire was Part B, which consisted of a tradeoff exercise in which each respondent received 8 stimuli cards containing experimentally designed scenarios describing Dentsply and each of its competitors. Each competitor was named - Dentsply, Vita, Ivoclar, etc. The starting, or reference, scenario - Exhibit 1 on the following page - showed base price and distribution for each supplier.1 Subsequent experimentally designed scenarios varied price and distribution mode for the various brands to measure the respondents' tradeoffs. For each scenario, the respondent was asked to allocate 100 points across the suppliers, reflecting his/her total plastic tooth purchases from each supplier over the next three months given the distribution and price conditions shown.

In addition, in Part A of the questionnaire, a selected set of respondent background characteristics (lab size, shares of current tooth suppliers, years in business, etc.) and preferences were collected.

Part A of the questionnaire and the instructions for responding to Part B are included in Appendix E.
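The constant-sum task described above can be expressed as a short validation routine. This is an illustrative sketch only; the supplier list follows Exhibit 1, and the function and variable names are hypothetical, not drawn from the study's actual data files:

```python
# Sketch: represent and validate one scenario-card response.
# Each card's allocation across the 11 brand/lines must total 100 points.

SUPPLIERS = [
    "Dentsply BIOFORM IPN", "Dentsply BIOBLEND IPN", "Dentsply CLASSIC",
    "Dentsply PORTRAIT IPN", "Dentsply TRUBLEND SLM", "Ivoclar SR VIVODENT PE",
    "Justi BLEND", "Kenson RESIN", "Myerson DURABLEND SPECIAL RESIN",
    "Universal VERILUX", "Vita VITAPAN",
]

def validate_card(allocation):
    """allocation: dict mapping supplier name -> allocated points."""
    if set(allocation) != set(SUPPLIERS):
        raise ValueError("allocation must cover all 11 brand/lines")
    total = sum(allocation.values())
    if total != 100:
        raise ValueError("points must sum to 100, got %s" % total)
    return True

# A hypothetical respondent splitting purchases between two Dentsply lines:
card = {name: 0 for name in SUPPLIERS}
card["Dentsply BIOFORM IPN"] = 60
card["Dentsply BIOBLEND IPN"] = 40
validate_card(card)  # -> True
```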

Exhibit 1: Sample Scenario Card

Plastic Teeth - Anterior Card (1 x 6)

  Brand/Line                         Price   Local    Mail-Order   Manufacturer   Share of
                                     in $    Dealer   Dealer       Directly       Purchases (%)
  Dentsply BIOFORM IPN               24.18   Yes      Yes          No             ____
  Dentsply BIOBLEND IPN              26.34   Yes      Yes          No             ____
  Dentsply CLASSIC                    3.90   Yes      Yes          No             ____
  Dentsply PORTRAIT IPN              26.28   Yes      Yes          No             ____
  Dentsply TRUBLEND SLM              27.78   Yes      Yes          No             ____
  Ivoclar SR VIVODENT PE             25.05   No       No           Yes            ____
  Justi BLEND                        12.84   Yes      Yes          Yes            ____
  Kenson RESIN                        3.75   Yes      Yes          Yes            ____
  Myerson DURABLEND SPECIAL RESIN    19.95   Yes      Yes          Yes            ____
  Universal VERILUX                  24.40   Yes      Yes          Yes            ____
  Vita VITAPAN                       29.01   Yes      No           Yes            ____

  Total = 100 points

[This is the reference card, C-141, sent to all respondents]

D. The Stimuli

Each respondent received 8 scenario cards. The top card in each set is the reference card, C-141; the remaining cards represent a block of 7 (shuffled) cards drawn from a block of the master design (Appendix B). Exhibit 1 shows C-141, the reference card. The 140 experimental cards (numbered C-1 through C-140) appear in 20 blocks of 7 cards each; their designs appear in Appendix B - the master experimental design. Appendix C contains seven illustrative stimuli cards.
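The packet-assembly logic can be sketched as follows. The mapping of respondents to blocks and of blocks to card numbers below is illustrative only; the actual assignments are given in the master design in Appendix B:

```python
import random

# Sketch: assemble one respondent's 8-card packet -- the fixed reference card
# C-141 on top, followed by one shuffled block of 7 experimental cards.
# The block-to-card-number mapping here is illustrative.

BLOCKS = 20
CARDS_PER_BLOCK = 7   # 20 blocks x 7 cards = cards C-1 through C-140

def packet_for(respondent_index, rng=random):
    block = respondent_index % BLOCKS          # rotate respondents over blocks
    first = block * CARDS_PER_BLOCK + 1
    cards = ["C-%d" % n for n in range(first, first + CARDS_PER_BLOCK)]
    rng.shuffle(cards)                         # experimental cards are shuffled
    return ["C-141"] + cards                   # reference card stays on top

packet = packet_for(3)   # respondent 3 draws block 3: cards C-22 .. C-28
```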

Pricing

The first 8 columns of the master experimental design contain the pricing information for the 8 primary brands: the 4 Dentsply brands - BIOFORM IPN, BIOBLEND IPN, PORTRAIT IPN, and TRUBLEND SLM - followed by Ivoclar SR VIVODENT PE, Myerson DURABLEND SPECIAL RESIN, Universal VERILUX and Vita VITAPAN, respectively. The prices for Dentsply CLASSIC, Justi BLEND and Kenson RESIN are fixed throughout the design at the prices shown in the reference scenario, Card C-141, in Exhibit 1. The experimental prices for the 8 primary brands are shown in Exhibit 2. As noted, there are four price levels for each primary brand, while the 3 secondary brands have fixed prices - the same as those shown in Exhibit 1.

Distributor Availability

Exhibit 3 shows the master coding for availability. There are 7 levels for this variable. Columns 9 through 13 of Appendix B show the availability codes for the active products. Column 9 is applicable for all of the five Dentsply brands' availability. Columns 10 through 13 of Appendix B show the availability coding for SR VIVODENT PE, DURABLEND SPECIAL RESIN, VERILUX and VITA, respectively. Justi BLEND and Kenson RESIN both received an availability coding of: Yes - Yes - Yes throughout all of the experimental cards.

Exhibit 2: Brands and Prices

  Brand                              Level 1   Level 2   Level 3   Level 4
  Dentsply Bioform IPN               $19.44    $21.76    $24.18    $26.60
  Dentsply Bioblend IPN              $21.07    $23.71    $26.34    $28.97
  Dentsply Portrait IPN              $21.02    $23.65    $26.28    $28.91
  Dentsply Trublend SLM              $22.22    $25.00    $27.78    $30.56
  Ivoclar SR Vivodent PE             $20.04    $22.55    $25.05    $27.56
  Myerson Durablend Special Resin    $15.96    $17.96    $19.95    $21.95
  Universal Verilux                  $19.52    $21.96    $24.40    $26.84
  Vita Vitapan                       $23.21    $26.11    $29.01    $31.91

  Fixed throughout the design:
  Dentsply Classic                   $ 3.90
  Justi Blend                        $12.84
  Kenson Resin                       $ 3.75
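For the eight primary brands, the four experimental levels in Exhibit 2 correspond closely to 80%, 90%, 100%, and 110% of the reference (base) price. A minimal sketch of that relationship, using the Vita Vitapan row as the example (the helper function is illustrative, not part of the study materials):

```python
# Sketch: the four experimental price levels sit at roughly 80%, 90%, 100%,
# and 110% of the reference price for each primary brand. This reproduces
# the Vita Vitapan row of Exhibit 2 exactly.

def price_levels(base):
    return [round(base * f, 2) for f in (0.80, 0.90, 1.00, 1.10)]

print(price_levels(29.01))   # Vita Vitapan -> [23.21, 26.11, 29.01, 31.91]
```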

Exhibit 3: The Availability Attributes

  Design Level   Local Dealer   Mail-Order Dealer   Manufacturer Directly
  1              Yes            Yes                 Yes
  2              Yes            Yes                 No
  3              Yes            No                  Yes
  4              Yes            No                  No
  5              No             Yes                 Yes
  6              No             Yes                 No
  7              No             No                  Yes
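The seven design levels in Exhibit 3 are exactly the Yes/No combinations over the three channels, excluding the degenerate case in which a brand is available from none of them. A brief sketch:

```python
from itertools import product

# Sketch: the 7 availability levels of Exhibit 3 are all Yes/No combinations
# over the three channels except "No-No-No" (brand unavailable everywhere).

CHANNELS = ("Local Dealer", "Mail-Order Dealer", "Manufacturer Directly")

levels = [combo for combo in product(("Yes", "No"), repeat=len(CHANNELS))
          if combo != ("No", "No", "No")]

assert len(levels) == 7                      # 2**3 - 1 combinations
assert levels[0] == ("Yes", "Yes", "Yes")    # design level 1
assert levels[6] == ("No", "No", "Yes")      # design level 7
```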

E. Data Collection

The data collection was conducted at my direction by Guideline Research.

The data collection consisted of a TMT design in which respondents were screened by telephone (screener questionnaire is included in Appendix D). Following that, they were sent by priority mail 2 envelopes consisting of a questionnaire (Appendix E) and 8 stimulus cards (one of the 20 blocks of the master design outlined in Appendix B and illustrated in Appendix C). Four to five days later they received a second telephone call and instructions on how to complete the forms. They were asked to return the completed forms and the stimuli to Guideline Research.

The field instructions were prepared by Guideline Research and are included in Appendix F. Neither the interviewers nor the respondents were informed of the purpose or sponsor of the study.

The data generated by the responses to Parts A and B of the questionnaire is incorporated into this report and contained, respectively, in the computer files titled "dental.as1" and "dental.as2."

F. Data Analysis

One method that can be used to analyze the data produced by this survey is the PRIDEM/PRIDEL model developed by Paul Green and Abba Krieger (and described in Appendix A). This model enables the user to "parse out" the effect of each experimentally manipulated variable on respondents' tradeoffs across suppliers. Respondent background attributes from Part A of the questionnaire are included in the model in order to examine their segmentation effects.

The initial analysis focused on 3 scenarios:

  1. Base scenario: Current market conditions at the time the survey was designed and conducted, as reflected in the base stimulus card (C-141) with one exception: that Vita is not available from a local dealer (for the reasons noted in the section above describing the questionnaire).
  2. Scenario 2: Vita and Ivoclar are available from a local dealer and from the manufacturer directly but not from a mail order dealer, and the price variable will remain the same as the base scenario for Vita and Ivoclar (all price and distribution variables remain the same as the base scenario for each of the other brand/lines);
  3. Scenario 3: Vita and Ivoclar are available from all three distribution options: local dealer, mail order dealer, and from the manufacturer directly, and the price variable will remain the same as the base scenario for Vita and Ivoclar (all price and distribution variables remain the same as the base scenario for each of the other brand/lines).

III. RESULTS

The initial results produced by the PRIDEM/PRIDEL model for three key scenarios are included in Exhibit 4 on the following page.

This model allows for the calculation of expected share for any scenario based on the factors and levels included in the study (and listed in Exhibits 2 and 3).

Exhibit 4

Estimated Market Share Results for Ivoclar and Vita Under Three Scenarios

Scenario 1 (Base Scenario): Current market conditions as reflected in the base stimulus card (C-141) with one exception, that Vita is not available from a local dealer.

  Ivoclar share: 5.05%                              Vita share: 3.37%

Scenario 2: Vita and Ivoclar are assumed to be available from a local dealer and from the manufacturer directly but not from a mail order dealer; the price variable remains the same as the base scenario for Vita and Ivoclar (all price and distribution variables remain the same as the base scenario for each of the other brand/lines).

  Ivoclar share: 6.84% (gain of 35% vs. Scenario 1)  Vita share: 3.69% (gain of 9% vs. Scenario 1)

Scenario 3: Vita and Ivoclar are available from all three distribution options: local dealer, mail order dealer, and from the manufacturer directly; the price variable remains the same as the base scenario for Vita and Ivoclar (all price and distribution variables remain the same as the base scenario for each of the other brand/lines).

  Ivoclar share: 6.25% (gain of 24% vs. Scenario 1)  Vita share: 4.44% (gain of 32% vs. Scenario 1)
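The relative gains reported above follow arithmetically from the share estimates. A short sketch verifying that computation (the function name is illustrative):

```python
# Sketch: the "Relative Gain vs. Scenario 1" figures in Exhibit 4 are the
# percentage changes of each scenario's share over the base-scenario share.

def relative_gain(base_share, scenario_share):
    return round(100 * (scenario_share - base_share) / base_share)

# Ivoclar: base 5.05%; Scenario 2 = 6.84%, Scenario 3 = 6.25%
assert relative_gain(5.05, 6.84) == 35
assert relative_gain(5.05, 6.25) == 24
# Vita: base 3.37%; Scenario 2 = 3.69%, Scenario 3 = 4.44%
assert relative_gain(3.37, 3.69) == 9
assert relative_gain(3.37, 4.44) == 32
```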

IV. CONCLUSIONS

This study uses conjoint analysis methodology, which is generally accepted and commonly used for assessing respondents' trade-offs among key marketing variables. The study was designed to generate data that allows analysis to:

  1. Establish empirically the relative importance of types of distribution and price in dental laboratory technicians' choices of artificial teeth brands
  2. Estimate the expected share of Dentsply and its competitors under various scenarios of the above two marketing variables

The survey was conducted at my direction according to generally accepted professional scientific standards for survey research used in litigation in order to assure the objectivity of the entire process.

The findings of the study using the PRIDEM/PRIDEL model as summarized in Exhibit 4 show that:

  • In the base case reflecting generally prevailing market conditions at the time the survey was designed and conducted, in which Ivoclar and Vita are unavailable from local or mail order dealers, the expected share of
    • Ivoclar is 5.05%
    • Vita is 3.37%
  • In a scenario in which Ivoclar and Vita are available only from a local dealer and from the manufacturer directly, the share of
    • Ivoclar is 6.84%, an increase of 35%
    • Vita is 3.69%, an increase of 9%
  • In a scenario in which Ivoclar and Vita are available from all three distribution modes (manufacturer directly, mail order dealer, or local dealer), the share of
    • Ivoclar is 6.25%, an increase of 24% vs. the base case
    • Vita is 4.44%, an increase of 32% vs. the base case




_______________/s/________________
Yoram Jerry Wind, Ph.D.

FOOTNOTES

1 The reference scenario card showed Vita VITAPAN as being available from a local dealer, because for some labs this is technically accurate; however, as I understand the facts, this is not the existing market condition for the majority of labs in the United States, and was not at the time the data was collected. This was, accordingly, adjusted for during the analysis (as discussed in section F, page 12).


APPENDIX A

BACKGROUND ARTICLE ON THE PRIDEM AND PRIDEL MODELS


European Journal of Operational Research 60 (1992) 31-44
North-Holland

Theory and Methodology


Modeling competitive pricing and market share: Anatomy of a decision support system *

Paul E. Green and Abba M. Krieger

The Wharton School of the University of Pennsylvania, Steinberg Hall-Dietrich Hall, Philadelphia, PA 19104-6371, USA

Received December 1989; revised July 1990

Abstract: Even with today's high emphasis on management decision support systems, relatively little has been published on the motivations, tribulations, and post mortems that generally accompany the development of such systems. This paper reports (in a mixture of narrative style and more formal model exposition) how a computerized decision support system for optimal price determination was developed, implemented, and finally applied to a broad range of industry problems.

Keywords: Pricing; demand analysis; conjoint analysis; multiattribute preference

1. Introduction

Product and service pricing is one of the oldest (and still very important) tools of the marketing executive. Each day business firms face questions like the following:

1. Competitor X has just increased its price by five percent. Should we match it, or stand pat?

2. How should we price our new product, which offers several technical advantages over current offerings?

3. How should we set prices among competing items in our current product line?

Answers to these questions are hard to find for at least two reasons. First, given a set of existing prices and market shares for products in a competing product class, it is often difficult to predict new shares accurately if one or more prices were to change. Second, it is difficult to predict how competitors will react to others' price changes.

Marketing researchers have dealt with the first question by proposing several new marketing research methods that show promise for augmenting information obtained from older approaches. Historically, the measurement of price-demand relationships has relied on statistical methods applied to either cross-sectional or time series data (e.g., Wittink, 1977). However, newer approaches, such as those based on laboratory studies (Pessemier, 1960), in-store experiments (Doyle and Gidengil, 1977), test market simulation (Silk and Urban, 1978), and willingness-to-pay surveys (Monroe and Della Bitta, 1978) have received increased attention and, in some cases, commercial application.

Even more recently, conjoint-based methods (Mahajan, Green, and Goldberg, 1982; Louviere and Woodworth, 1983; Wyner, Benedetti, and Trapp, 1984) have considered explicitly designed competitive product profile descriptions. Respondents either pick their preferred choice from the set of alternatives, indicate their likelihood of choosing each option, or state their preferences for alternative allocations of a common resource across products or activities competing for that resource (Carroll, Green, and DeSarbo, 1979).

A prototypical procedure entailing tradeoff techniques is that proposed by Mahajan, Green, and Goldberg (MGG). Their survey data collection approach is a modification of one originally proposed by Jones (1975). Respondents are shown profile descriptions of P products, each with an associated brand name and price. Profiles are designed according to fractional factorials in which attributes and levels are idiosyncratic to each brand. Respondents allocate a constant sum (typically 100 points) across each stimulus option indicating the likelihood that they would choose each option, given the stated prices for each alternative. MGG employ a conditional logit model (Theil, 1969) to estimate parameter values that satisfy the following sum and range constraints:

1. The estimated probability of choosing some p-th brand ranges between zero and one.

2. The sum of the choice probabilities across all P brands (including an all-other-brands category, if appropriate) equals unity.

While MGG discuss how their approach might be used in actual business situations, they also add that their experience with applying the model to real-world problems is quite limited.
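The two constraints above are satisfied automatically by a logit-type share model, since exponentiation and normalization force each probability into (0, 1) and make the probabilities sum to one. A minimal sketch illustrating the constraints (this is not the MGG estimation procedure itself, and the utility values are arbitrary):

```python
import math

# Sketch: a logit (softmax) share model satisfies both constraints by
# construction -- each probability lies strictly between zero and one,
# and the probabilities sum to unity.

def logit_shares(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

shares = logit_shares([1.2, 0.4, -0.3])
assert all(0.0 < s < 1.0 for s in shares)      # range constraint (1)
assert abs(sum(shares) - 1.0) < 1e-12          # sum constraint (2)
```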

1.1. Inputs from the business world

As we mulled over the idea of adapting some of the conjoint methodology to the measurement and strategy issues related to optimal pricing, we sought the advice of several commercial marketing research firms. Gradually, a pattern emerged regarding their views about what managers would like to see in a price/demand model:

1. The approach should be able to utilize survey methods, similar to the kinds of buyer trade off data that are collected in applied conjoint studies.

2. The model should be able to consider not only the impact of price changes on market share but also the effect on share of non-price attribute changes on the part of any or all competitors.

3. The model should be able to make market share predictions at both the total market and individual market segment levels.

4. The model should be capable of examining interactions among different competitors' prices and non-price attribute levels.

5. The model should be 'decomposable' in the sense of allowing the client to focus attention on the behavior of a single product's share as a function of individual competitors' prices and non-price activities.

6. The model should be capable of being calibrated to actual starting (i.e., existing) market shares and prices.

7. The model should contain an 'optimizing' feature in which the user can find the best price for a given product, conditional on fixed prices for competitors and specified levels of all non-price attributes, self and competitors.

8. The model should be flexible enough to allow interpolation across discrete price points.

9. The model should be user friendly and, if possible, adaptable to a personal computer.

With these desiderata in mind, we set about the task of constructing a suitable data collection method, parameter estimation technique, and price optimizing routine.

1.2. Borrowing from the past

Fortunately, earlier work in componential segmentation (Green, Krieger, and Zelnio, 1989) led to the development of a conjoint model for forecasting buyers' likelihoods of purchase from information about product attribute preferences and buyer backgrounds (e.g., demographics, life styles, current brand usage, etc.). The PROSIT (PROduct SITuation) model contained a number of relevant features for the current effort, namely PROSIT's ability to estimate parameter values for both product attributes and buyer attributes, as well as selected two-way, within-set and between-set interactions.

Furthermore, PROSIT contained an optimizing feature wherein one could find the best product profile for a given market segment or the best segment for a given product.

In the PROSIT model, all parameters are estimated as though each predictive variable is categorical (i.e., predictors are treated as dummy variables in the spirit of conjoint analysis). The response variable is 'univariate' - typically, a buyer's subjective likelihood of choosing a specific product and/or service supplier as a joint function of product profiles (for that brand and competitive brands) and respondent background variables.

In contrast, our present problem emphasized an underlying continuous variable (i.e., price) and entailed a 'multivariate' response, namely, the respondent's subjective likelihood of choosing each of P products as a function of their product attributes, their prices, and the buyer's background attributes. Still, the earlier PROSIT model seemed like a good place to start. On the plus side, it had been successfully applied in a variety of industrial applications (particularly in the pharmaceutical and computer industries) and had already been adapted for interactive, personal computer applications.

2. Decisions, decisions

At this point we had a starting point for the PRIce-DEMand model (PRIDEM). However, a number of decisions still had to be made on adapting the PROSIT model for pricing and, in particular, incorporating a multivariate response variable. PROSIT was estimated by OLS dummy variable regression. Its optimizer employed a heuristic for finding optimal combinations of product and/or segment attribute levels from the full Cartesian product set of attribute levels. In contrast, our current interest centered on price optimization, conditional on given settings of all other attributes.

2.1. Handling the price attribute

In keeping with conjoint methodology, it seemed appropriate to maintain the treatment of all attributes (including price) as categorical, encoded as dummy variables. From a pragmatic viewpoint this would allow us to use a portion of the same software already in place for fitting the PROSIT model. Second, we could avail ourselves of highly efficient, fractional factorial designs for setting up the product and price stimulus design that would estimate all main effects as well as selected two-way interactions. Moreover, these (orthogonal) designs are flexible enough to accommodate enough price levels (e.g., five to nine per brand) to approximate a continuous part-worth function rather closely.
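Treating a four-level price attribute as categorical means encoding it with indicator (dummy) variables, one level serving as the omitted baseline. A minimal sketch; the coding convention shown is one common choice, not necessarily the one used in PROSIT/PRIDEM:

```python
# Sketch: indicator ("dummy") coding of a single categorical attribute with
# four levels, one level acting as the omitted baseline.

def dummy_code(level, n_levels=4, baseline=1):
    """Return n_levels - 1 indicators for a 1-based level index."""
    return [1 if level == k else 0
            for k in range(1, n_levels + 1) if k != baseline]

assert dummy_code(1) == [0, 0, 0]   # baseline level: all indicators off
assert dummy_code(3) == [0, 1, 0]   # third price level
```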

Why not just select a polynomial (e.g., quadratic) to represent part worths for the price variable? One of the problems with this approach is that the resulting curve is sensitive to error. In fact, it is possible that the fitted curve could depart rather markedly from the actual response associated with the experimentally designed price points. Clearly, with polynomial fitting there is no requirement that the curve 'go through' the response value observed at each discrete experimental price point.

In contrast, by using splines we could make sure that the response function passed through the knots (i.e., price points). Furthermore, we could make the function smooth between each pair of knots so that simple (classical) methods of optimization could be used to find the solution that maximized the sponsor's contribution to overhead and profit (conditional on fixed prices for competitive products).

Accordingly, we set up a computer routine for fitting one and two-dimensional splines where the knots represented the discrete price levels used in the original experimental design underlying the competitive product profiles. The Appendix describes this procedure.
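The spline idea can be sketched as follows, assuming SciPy is available. The knot locations reuse the Bioform IPN price points from Exhibit 2 of the report, but the share values and unit cost are hypothetical, and this is a sketch rather than the routine described in the Appendix:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Sketch: interpolate part-worth "knots" at the discrete experimental price
# points with a cubic spline (which passes through every knot), then maximize
# contribution = (price - unit cost) x share between the end knots.
# Share values and unit cost below are hypothetical.

prices = np.array([19.44, 21.76, 24.18, 26.60])
shares = np.array([0.42, 0.36, 0.30, 0.25])

share_fn = CubicSpline(prices, shares)        # response passes through knots
unit_cost = 12.0

res = minimize_scalar(lambda p: -(p - unit_cost) * share_fn(p),
                      bounds=(prices[0], prices[-1]), method="bounded")
best_price = res.x
```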

2.2. Making the model multivariate

A second problem with the adaptation of PROSIT to PRIDEM was how to deal with the multivariate response variable. The PROSIT model is fit by ANOVA-like, OLS regression. Whether the original PROSIT response variable was quantal (e.g., 1 or 0) or each respondent's subjective likelihood of purchase on a 0 to 1.0 scale, no attempt was made in PROSIT to transform it to a logit (as was done in Mahajan, Green, and Goldberg, 1982).

Why not, then, set up a multinomial logit model with brand and product interactions, rather than fitting individual linear probability models and then finding market shares on a post hoc basis? We chose to maintain OLS fitting and the linear probability model for two reasons. First, the PROSIT model has a rather elaborate, built-in cross-validation feature which we wished to retain for assessing the predictive accuracy of the effects of prices, market segments, and non-price attributes on a single product's likelihood of purchase. This could be applied to each product, in turn, as a way to see if some part-worth functions are poorly estimated.

Second, despite the theoretical attractiveness of the multinomial logit (see Mahajan, Green, and Goldberg, 1982), we noted that Brodie and De Kluyver (1984) have reported that linear probability models, with post hoc adjustment (to respect non-negativity and sum constraints), have fared as well as the more complex multinomial logit models in terms of empirical market share validation. (Still, it should be mentioned that the current structure of PRIDEM could be reformulated in terms of a multinomial logit.)
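The post hoc adjustment mentioned above, which respects the non-negativity and sum constraints, can be sketched in a few lines (the function name and raw values are illustrative):

```python
# Sketch: the post hoc adjustment of linear-probability-model share
# predictions -- negative shares are clipped to zero, then all shares are
# renormalized to sum to one. Raw values below are hypothetical.

def adjust_shares(raw):
    clipped = [max(0.0, s) for s in raw]
    total = sum(clipped)
    return [s / total for s in clipped]

adjusted = adjust_shares([0.55, 0.30, -0.05, 0.20])
assert all(s >= 0.0 for s in adjusted)         # non-negativity respected
assert abs(sum(adjusted) - 1.0) < 1e-12        # sum constraint respected
```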

With these preliminary decisions made, it was then time to formulate the model.

3. The PRIDEM model

To motivate our description of the PRIDEM model, consider a situation in which a pharmaceutical firm wishes to increase the price of its antihypertensive drug brand A. There are five other competing brands in the market niche of interest to brand A's producers: B, C, D, E, and F. The producers of brand A are able to estimate per-unit variable production/distribution costs for each of the six competitive brands.

In designing the marketing research survey, brand A's producers considered four price levels each for brands A, B and C, three levels each for D and E, and two levels for the more remote competitor, brand F. In addition, they selected one three-level non-price attribute describing brand A's dosage schedule: once daily, twice daily, or three times daily.

A conjoint orthogonal design of 64 profile descriptions was set up. Each respondent received eight of the profile descriptions, drawn from the master design. For each description the respondent was asked to allocate 100 points across the six competitive brands so as to reflect the proportion of hypertensive patients for whom each drug would be prescribed, under the stated conditions. (Prior to this task each respondent similarly evaluated a base-case profile showing the current prices of each brand and brand A's current dosage level of three times daily.) Figure 1 shows an illustrative stimulus card for one of the experimental conditions.

Figure 1. Illustrative stimulus card

  Brand                        Current price per day's therapy
  Brand A (dosage twice/day)   $1.97
  Brand B                      $1.88
  Brand C                      $2.12
  Brand D                      $2.29
  Brand E                      $2.09
  Brand F                      $2.09

  Total = 100%

Respondents were classified, a priori, by five segment attributes: specialty (cardiologists versus general practitioners); age (under 35, 35 and older); type of practice (solo versus group); current brand favorite (brand A versus others); and patient load, within specialty (above median versus below median).1

3.1. Preliminaries

In describing the PRIDEM model more formally, we first consider the question of estimating the market shares for each brand, as a function of manipulated product/price variables and respondent characteristics. Market shares are assumed to depend on three types of attributes: (a) market segment attributes; (b) non-price (e.g., product) attributes; and (c) price attributes. We let

l_1, l_2, ..., l_S

denote the number of levels associated with each of the S segment attributes. In the illustrative problem these attributes describe the decision makers, such as specialty (cardiologist, general practitioner), age (under 35, 35 and over), and so on.

Similarly, we let

m_1, m_2, ..., m_T

denote the number of levels associated with each of the T non-price attributes. In the illustrative problem there is only one non-price attribute: dosage (once daily, twice daily, three times daily).

Finally, the market shares are also assumed to depend on the brands' prices. We assume R ≤ P price attributes; this allows for the case in which a subset of size P - R of the P brands does not vary with respect to price. We let

n_1, n_2, ..., n_R

denote the number of levels of each of the R price attributes. To simplify notation, we further assume that the brands are ordered, so that brand i refers to the brand whose price is varying in price attribute i. Associated with each level of each price attribute is an actual price (e.g., in dollars per day's therapy). Prices are denoted by

p_rj,    r = 1, 2, ..., R;   j = 1, 2, ..., n_r.

We shall use i, j, and k to subscript attribute levels in general.

3.2. Segment components

The segment attributes are used to define the universe over which the market shares are computed. We specify a segment by assigning selected attribute levels to the S segment attributes. We can combine segments by aggregation. More generally, we can construct any universe of interest by a set of non-negative segment weights

w_sj,    s = 1, 2, ..., S;   j = 1, 2, ..., l_s,

so that

sum_{j=1}^{l_s} w_sj = 1    for s = 1, 2, ..., S.

In particular, a given segment with levels (i_1, i_2, ..., i_S) is captured by setting

w_{s,i_s} = 1    for s = 1, 2, ..., S

and

w_sk = 0    otherwise.

Through the use of weighting coefficients, PRIDEM can select a specific weighted universe across all attributes with (say) weights of 0.7 and 0.3 for cardiologist and GP, respectively, and weights of 0.2 and 0.8 for under 35 years and 35 years or older, respectively. Given the five two-level background descriptors described above, we have a maximum of 32 distinct segments.
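The weighted-universe idea can be sketched in a few lines, under the assumption (ours, not stated explicitly above) that a segment's joint weight is the product of its per-attribute weights; the attribute names and weights below are the text's illustrative values.

```python
# A short sketch (ours) of selecting a weighted universe, assuming that
# a segment's joint weight is the product of its per-attribute weights.
from itertools import product

weights = {
    "specialty": {"cardiologist": 0.7, "GP": 0.3},
    "age": {"under 35": 0.2, "35 or older": 0.8},
}

attrs = list(weights)  # ["specialty", "age"]
joint = {
    combo: weights[attrs[0]][combo[0]] * weights[attrs[1]][combo[1]]
    for combo in product(*(weights[a].keys() for a in attrs))
}

w = joint[("cardiologist", "35 or older")]
print(round(w, 2), round(sum(joint.values()), 2))  # 0.56 1.0
```

Because each attribute's weights sum to one, the joint weights also sum to one, so any such setting defines a proper universe.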

Later on, we shall describe how the 'optimal' price for each product is determined, given stated prices for all other brands. The optimal price is defined as the value that maximizes contribution to overhead and profit, given by the expression:

industry sales units × (price − variable cost per unit of brand p) × (market share of brand p).

(In applying the computer-based PRIDEM model, industry sales are usually set, for convenience, at 1.0.)

3.3. Market share model

Associated with each brand p is an estimated market share

f^{(p)}(k; j, w)

where k is a vector (of length R) of prices, j is a vector (of length T) of levels for the non-price attributes, and w is a set of S vectors (of respective lengths l_1, l_2, ..., l_S) denoting the universe of decision makers (i.e., the physicians).

The function f(p) is obtained by first fitting a main effects model to the raw response data (i.e., the likelihood of prescribing the p-th brand in question), where the predictors are the S segment attributes, the T non-price attributes, and the R price attributes, all expressed as dummy variables. Selected two-way interactions are then added to the model in a sequential, stagewise manner (Green and DeSarbo, 1979). As noted above, interactions can be either within the segment, non-price, or price attribute sets, or between sets.

The fitting of main effects and interactions yields a set of regression-based functions h^{(p)}, p = 1, 2, ..., P, one for each product, as a preliminary step toward obtaining f^{(p)}. We first discuss how each h^{(p)} is obtained and then how it is adjusted to find f^{(p)}. The model is described, in part, by its L interactions. As noted earlier, each interaction l can be of several differing types: it pairs attribute u_{l1} of type q_{l1} with attribute u_{l2} of type q_{l2}, where type 1 denotes a segment attribute, type 2 a non-price attribute, and type 3 a price attribute. We have the combinations shown in Table 1.

For example, if q_{11} = 2, q_{12} = 3, u_{11} = 3, and u_{12} = 4, then the first interaction is between the third non-price attribute and the fourth price attribute.

Table 1

q_{l1}   q_{l2}   Nature of interaction
1        1        Segment by segment attribute
1        2        Segment by non-price attribute
1        3        Segment by price attribute
2        2        Non-price by non-price attribute
2        3        Non-price by price attribute
3        3        Price by price attribute

Describing the formal regression model for estimating each individual product's h^{(p)} is a bit messy because of the large variety of possible interaction terms. We define h^{(p)} as

h^{(p)}(i, j, k) = A^{(p)} + \sum_{s=1}^{S} B^{(p)}_{s,i_s} + \sum_{t=1}^{T} C^{(p)}_{t,j_t} + \sum_{r=1}^{R} D^{(p)}_{r,k_r} + \sum_{l=1}^{L} E^{(p)}_{l}(i, j, k)    (1)

where

A^{(p)} denotes the intercept term for product p's function,

B^{(p)}_{s,i_s} denotes the main effect partworth for level i_s of segment attribute s,

C^{(p)}_{t,j_t} denotes the main effect partworth for level j_t of non-price attribute t,

D^{(p)}_{r,k_r} denotes the main effect partworth for level k_r of price attribute r, and

E^{(p)}_{l}(i, j, k) is an entry in the matrix associated with interaction l. The specific entry depends on i, j, and k, together with the interaction's type and attribute indices (q_{l1}, u_{l1}; q_{l2}, u_{l2}).

3.4. Base-case calibration

To calibrate each individual brand model, we adjust each h(p) obtained from the individual product regressions to a base-case profile. This is accomplished by finding h(p) for this profile and then multiplying all of the parameters (A, B, C, D, E) by b(p)/h(p), where b(p) is the given market share for the base-case profile.

Finally, we obtain the market share function f^{(p)} from h^{(p)} by normalizing the individual h^{(p)} values by means of the function

f^{(p)}(k; j, w) = (h^{(p)})_+ / \sum_{q=1}^{P} (h^{(q)})_+    (2)

where (x)_+ = max(x, 0), the segment weights w enter through the weighted average of each h over the chosen universe, and each h has been previously adjusted to base-case market shares, as described above. Note that if (h^{(p)})_+ = 0 for all p, then we set f^{(p)}(k; j, w) = 1/P.
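The normalization in (2) can be made concrete with a minimal sketch (ours, not the PRIDEM code) that treats both cases explicitly:

```python
# A minimal sketch of the normalization in (2): clip each brand's raw
# prediction h at zero, then normalize to shares; if every clipped value
# is zero, split the market evenly (1/P).
def shares(h_values):
    clipped = [max(h, 0.0) for h in h_values]
    total = sum(clipped)
    if total == 0.0:
        return [1.0 / len(h_values)] * len(h_values)
    return [c / total for c in clipped]

print([round(s, 4) for s in shares([0.3, -0.1, 0.9])])  # [0.25, 0.0, 0.75]
print(shares([0.0, 0.0]))                               # [0.5, 0.5]
```

A brand with a negative raw prediction simply receives a zero share; the remaining shares always sum to one.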

3.5. Additional remarks

As noted earlier, we fit each h(p) regression function as a simple linear probability model in which predicted values need not obey a 0-1 range constraint; simple OLS regression is employed. Similarly, f(p) is obtained by a normalizing procedure which simply ensures that all of the individual h(p) predicted values are non-negative. (As described earlier, other procedures, including multinomial logit, could be used.)

It should also be pointed out that the sequential fitting of two-way interactions requires that attention be paid to the significance testing of additional terms. This is implemented by procedures described in Green and DeSarbo (1979). In addition, each individual h(p) model is cross-validated at each stage in the interaction fitting procedure. Cross-validated predictions are employed as the principal guide to the selection of appropriate interaction terms, once the main effects have been fitted.

4. Price interpolation and optimization

There are two remaining aspects of the model that are not explained fully by (1) for h^{(p)}. (Since the discussion below applies to all p, we now omit the superscript.) We first note from the preceding discussion that market shares can only be predicted at the price levels associated with the price attributes. It is desirable to be able to interpolate, i.e., to predict market shares at prices x_r, r = 1, ..., R, that are not limited to the original levels p_{rj}. Second, we have not discussed how to find the optimal prices x_r^*, r = 1, ..., R.

4.1. Interpolation procedure

The solutions to both of these problems depend upon the method of interpolation between successive price levels p_{r,j} and p_{r,j+1}. To this end, we assume that the weightings over segments and levels for non-price attributes are fixed in any given run of the model. The function h can then be written as h(x_1, ..., x_R), where x_1, ..., x_R denotes prices for the R price attributes. Since we fit an additive model with interactions, we can write

h(x_1, ..., x_R) = A + \sum_{r=1}^{R} g_r(x_r) + \sum_{r<s} g_{rs}(x_r, x_s)    (3)

where A includes the intercept, the main effects for segments and non-price attributes, and interactions that do not involve price attributes; g_r includes the main effect for price attribute r and all interactions involving the r-th price attribute with a segment attribute or a non-price attribute; and g_{rs} refers to the interaction between the r-th and s-th price attributes (where g_{rs} = 0 if this interaction does not appear in the model).

The function h is R-dimensional with known values on a lattice of points p_{rj}, r = 1, ..., R; j = 1, ..., n_r. We could fit a spline (Greville, 1969; Rice, 1969) to h, treating the p_{rj} as the knots; however, we would not be using all of the known information. Since g_r and g_{rs} are known at the p_{rj} and at the pairs (p_{ri}, p_{sj}), we can fit one- and two-dimensional splines, respectively, to these functions, thus determining h. The Appendix describes how this is done.

4.2. Finding the optimal value for price

From the discussion in the previous sections (and the Appendix), we need only consider the price x_p between two successive knots. Hence, on each such interval we can write h^{(p)} as a spline piece in x_p alone,

h^{(p)}(x_p; \bar{x})    (4)

where \bar{x} includes the assumed specified values for the prices of the remaining P − 1 products. Hence, the market share for product p is

f^{(p)}(x_p; \bar{x}) = (h^{(p)}(x_p; \bar{x}))_+ / \sum_{q=1}^{P} (h^{(q)}(x_p; \bar{x}))_+    (5)

and the objective function, Z, can be written as:

Z(x_p) = (x_p − c_p) f^{(p)}(x_p; \bar{x})    (6)

where c_p is the variable cost per unit of product p. It is straightforward to solve (6) by setting Z′(x_p) = 0, which reduces to a polynomial equation of order 2K (K being the degree of the spline pieces), and comparing the results to the values of Z at the knots. In particular, if K = 1, each piece of Z is the ratio of a quadratic to a linear function of x_p, so the stationary condition yields at most two roots per interval. We find the points such that Z′(x_p) = 0 on each interval [p_{p,j}, p_{p,j+1}], j = 1, ..., n_p − 1. Finally, we compare Z at these points, provided that they lie in the appropriate interval, and at the knots, to determine the optimal price x_p^*.
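As an illustrative sketch of this optimization step (our construction, not the PRIDEM solver), the fragment below interpolates one brand's h linearly between its price knots, forms shares by clipped normalization as in (2), and searches a fine price grid in place of the closed-form root finding; all h values are hypothetical.

```python
# Sketch (ours) of the pricing step: linear interpolation of h between
# knots, clipped-normalization shares, and a grid search over price in
# place of solving Z'(x) = 0 interval by interval. h values hypothetical.
def interp(knots, values, x):
    """Piecewise-linear interpolation between successive knots."""
    for (x0, v0), (x1, v1) in zip(zip(knots, values), zip(knots[1:], values[1:])):
        if x0 <= x <= x1:
            return v0 + (v1 - v0) * (x - x0) / (x1 - x0)
    raise ValueError("price outside knot range")

knots = [21, 28, 34, 38, 41]            # price levels from the study design
h_own = [0.95, 0.80, 0.60, 0.45, 0.30]  # hypothetical: own h falls as price rises
h_rival = 0.55                          # hypothetical rival h at its fixed price
cost = 19.0

def contribution(x):
    own = max(interp(knots, h_own, x), 0.0)
    share = own / (own + max(h_rival, 0.0))
    return (x - cost) * share           # Z(x) with industry units = 1.0

grid = [round(21 + 0.01 * i, 2) for i in range(2001)]  # $21.00 .. $41.00
best_z, best_x = max((contribution(x), x) for x in grid)
print(round(best_x, 2), round(best_z, 2))
```

The interior maximum arises because the margin (x − cost) grows with price while the interpolated share falls, exactly the trade-off the closed-form procedure resolves analytically.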

5. A real-world application

The PRIDEM model and decision support system have been implemented on both the mainframe (VAX 8700) and the personal computer. A number of industrial applications of PRIDEM have been made over the past three years. We illustrate PRIDEM's application with an actual industry example involving two pharmaceutical companies' pricing strategies in the marketing of a diet supplement for use by hospital patients who have trouble swallowing traditional foodstuffs. (All data have been disguised to respect sponsor confidentiality.)

5.1. Study background

For several years, only one pharmaceutical firm, hereafter called Alpha, had been marketing a special diet supplement for hospital patients with esophagus ailments. The product was designed for drinking (through a straw); it contained a balanced set of nutrients. More recently, a second firm, hereafter called Beta, had developed its own diet supplement. Several of its product properties, as well as its marketing and pricing plans, differed from those of Alpha.

Prior to Beta's entry, Alpha's ongoing price for its diet supplement was $41 per day per patient. Beta believed that Alpha's short-run monopoly could be upset by penetration pricing; accordingly, Beta introduced its product at only $38 per individual per day. The results were dramatic; in two years, Beta had penetrated the market to such an extent that the two firms' shares were 28% and 72%, respectively, for Alpha and Beta. At this point, Beta wondered whether its price was 'right' (in the sense of optimizing its contribution to overhead and profit) and what the implications might be if Alpha were to change its still-current price of $41.

5.2. Designing the conjoint survey

A conjoint study was designed to obtain data for use in the PRIDEM model. First, Beta personnel discussed possible non-price attributes that could affect market shares independently (or possibly interactively with price). Four such attributes were identified:

1. Packaging for Alpha: 4-ounce can (current) versus 6-ounce can (prospective); Beta's dosage was already at 6 ounces per can.

2. Extended contract price guarantee for Alpha: NO (current) versus YES (prospective); Beta had no price guarantee.

3. Concentration of amino acids for Beta: low concentration (current) versus high concentration (prospective); Alpha's concentration was already slightly lower than Beta's current concentration.

4. Educational aids for Beta: NO (current) versus YES (prospective); Alpha already had educational aids.

In addition to the non-price attributes, Beta's management considered several possibilities for identifying market segments. Management settled on two primary segmentation bases:

1. Type of respondent (i.e., as an influence on which brand is purchased):

  1. nurses,
  2. doctors,
  3. hospital pharmacists.

2. Size of hospital:

  1. large (over 500 beds),
  2. small (fewer than 500 beds).

Finally, Beta management estimated that variable costs for producing and distributing the diet supplement were about equal between the two firms; they estimated these costs at $19 per individual per day.

A sample of 390 respondents was selected, according to the two stratifying criteria (profession and hospital size). All interviews were conducted by personal administration, following pre-arranged appointments. Respondents received honoraria for their participation.

The conjoint portion of the interview was based on a master orthogonal experimental design of 50 profile cards. Each profile card contained information on brand names, non-price attribute levels under each brand name, and prices per patient day. The prices were drawn from the following sets:

  1. Alpha - $43, $41 (current), $34, $28, $21.
  2. Beta - $41, $38 (current), $34, $28, $21.

Each respondent first received a 'base case' profile card, followed by five cards (balanced with respect to prices) from the overall orthogonal design. For each card, the respondent was asked to indicate what his/her recommendation would be to purchasing agents responsible for choosing the diet supplement supplier. Each respondent was asked to split 100 points (constant sum scale) between the two potential suppliers, reflecting their likelihood of recommending each.

Other background information, including the hospital's current use of diet supplements, the respondent's role in the contract decision process, years of experience, etc., was also collected for cross-tabulation with the conjoint results.

6. Running the PRIDEM program

Figure 2 shows a portion of the PRIDEM computer run for the problem described above. Illustratively, we input the base-case prices (as a control) and note, of course, the same market shares as originally read in (e.g., Alpha's share is 0.28). We also observe that the contribution to overhead and profit per patient day is $6.16 and $13.68 for Alpha and Beta, respectively.
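These base-case figures follow directly from the contribution definition given earlier (with industry units set to 1.0), as a quick arithmetic check confirms:

```python
# Checking Figure 2's base-case contributions against the definition:
# contribution = industry units (1.0) x (price - variable cost) x share.
alpha = (41 - 19) * 0.28  # Alpha: price $41, cost $19, share 0.28
beta = (38 - 19) * 0.72   # Beta: price $38, cost $19, share 0.72
print(round(alpha, 2), round(beta, 2))  # 6.16 13.68
```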

6.1. Overall market analysis

We next consider alternative pricing strategies, conditioned on the non-price attributes remaining at their original (current) levels. Suppose we wish to find Alpha's optimal price at base-case levels. We enter instructions accordingly and find from Figure 2 that its optimal price is $36.19, a decrease from its current level of $41. If Alpha were to reduce its price, with no retaliation from Beta, its share would increase ten percentage points (from 0.28 to 0.38). Overhead/profit contribution would increase from $6.16 to $6.53.

Next, we repeat the exercise for Beta, conditional on no change in Alpha's price from its status quo of $41. In this case Beta's optimum entails an increase in its price to $41 (which then happens to be at parity with Alpha). Beta's share would decline from 0.72 to 0.65, but its contribution would increase from $13.68 to $14.26.

Next, we consider a unilateral strategic change by Beta - one that both improves its non-price attribute levels (from their current levels to their prospective levels) and decreases its price from the original $38 level to $34. Given no retaliation from Alpha, the net effect is to increase Beta's share to 0.868 and its contribution to $13.03.

What should Alpha do if Beta drops to $34 and improves its non-price attributes? Assuming that Alpha's only short-run retaliation is price, the PRIDEM model finds that it should lower price from its starting level of $41 to $34 (at parity with Beta's new price).

Next, we consider a case in which Alpha stays at $41 but Beta really drives down its price (to $28). Moreover, both firms improve their respective non-price attributes to level 2 (prospective levels). The net effect of these actions is that Beta's share markedly increases to 0.933 but its contribution drops substantially (to $8.39).

6.2. Selected segment analysis

At this point we elect to stay with the same non-price parameters as described immediately above. But now we focus on a specific market segmenting variable: type of respondent (nurses, doctors, and pharmacists). PRIDEM shows that the effects on Beta's share differ by segment: 0.892 (nurses), 0.921 (doctors), and 0.982 (pharmacists). Their weighted average is 0.933, as noted above for the total market analysis.


RUN PRIDEM
INPUT THE NUMBER OF PRODUCTS                          • Initial parameter inputs
2
INPUT THE NO. OF SEG., PROD./PRICE, AND PRICE ATTRIBUTES
2 6 2
INPUT THE NO. OF LEVELS FOR SEGMENT ATTRIBUTES
3 2
INPUT THE NO. OF LEVELS FOR PRODUCT AND PRICE ATTRIBUTES
2 2 2 2 5 5
INDICATE THE FILE WITH THE SEGMENT WEIGHTS
PRIDEM.WTS                                            • Segment weights file
INDICATE THE FILE WITH THE PRICE LEVELS
PRIDEM.PRI                                            • File containing dollar price amounts
INPUT THE FILE NAME THAT DESCRIBES THE MODEL
ALPHA.INP                                             • Input parameters for Alpha
INPUT 1 FOR TUKEY OR 2 FOR TABLE
2
INDICATE THE NUMBER OF INTERACTION TERMS
4
INPUT THE FILE NAME THAT DESCRIBES THE MODEL
BETA.INP                                              • Input parameters for Beta
INPUT 1 FOR TUKEY OR 2 FOR TABLE
2
INDICATE THE NUMBER OF INTERACTION TERMS
4
INPUT 1 IF THE PRICES ARE INCREASING, 0 IF DECREASING
0
INDICATE THE BASE-CASE MARKET SHARES
28 72
INDICATE THE BASE-CASE PROD. NON-PRICE ATT. LEVELS    • Initial non-price attribute settings
1 1 1 1
INPUT THE BASE-CASE PRICES                            • Initial price settings
41 38
INPUT THE VARIABLE COSTS PER PRODUCT                  • Initial costs
19 19
INDICATE THE NEW-CASE NON-PRICE ATT. LEVELS           • Base-case confirmation analysis
1 1 1 1
INPUT THE NEW-CASE PRODUCT PRICES
41 38
INPUT 1 FOR OVERALL, 2 FOR ATTRIBUTE, OR 3 FOR DETAILED ANALYSIS    • Total market analysis for base case
1
THE MARKET SHARES ARE: 0.280 0.720                    • Shares
THE PROFIT RETURNS ARE: 6.16 13.68                    • Contributions to overhead/profit
INPUT 1 FOR AN OPTIMAL PRICE ANALYSIS, ELSE 0
1

INPUT THE PRODUCT                                     • Optimal Alpha price conditioned on Beta's price
1
THE OPTIMAL PRICE = 36.187
THE MARKET SHARES ARE: 0.38 0.62
THE PROFIT RETURNS ARE: 6.536 11.775
INPUT 1 FOR AN OPTIMAL PRICE ANALYSIS, ELSE 0
1

INPUT THE PRODUCT                                     • Optimal Beta price conditioned on Alpha's price
2
THE OPTIMAL PRICE = 41.000
THE MARKET SHARES ARE: 0.35 0.65
THE PROFIT RETURNS ARE: 7.743 14.257
INPUT 1 FOR AN OPTIMAL PRICE ANALYSIS, ELSE 0
0

Figure 2. Illustrative run of PRIDEM (mainframe version)

Table 2

Round   Alpha price   Beta price
0       $41           $38
1       $36.19        $38
2       $36.19        $39.83

6.3. Weighted segment analysis

To round out the discussion, we also consider a weighted segment analysis for both type of respondent and hospital size. Illustratively, we assign weights of 0.5, 0.4, and 0.1 to nurses, doctors, and pharmacists, respectively. We assign weights of 0.8 and 0.2 to large and small hospitals, respectively.

The net result of this parameter setting is a Beta share of 0.909 and an associated contribution of $8.18. These outputs are each lower than their total market counterparts.

6.4. Dynamic changes

Up to this point, our PRIDEM illustration did not explore sequential competitive retaliation. It is, however, a simple matter to use the program in such a way that in Round 1 Alpha initiates action; in Round 2 Beta answers in some fashion, and so on.

By way of illustration, a sequence of actions was implemented, based on starting conditions of $41 (Alpha) and $38 (Beta) with all non-price attributes at their current levels. We assume that Alpha starts out as the price "leader," Beta follows suit, and so on (each trying to optimize its contribution, conditional on the other's prices). For two such rounds, the results are shown in Table 2.
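The round-by-round logic can be mimicked in a few lines: each firm in turn picks the price that maximizes its own contribution, holding the rival's price fixed. The linear share function below is a hypothetical stand-in of our own for the fitted PRIDEM model, so the prices it produces illustrate only the mechanics, not Table 2's values.

```python
# Sketch (ours) of alternating best-response pricing rounds. The share
# function is a hypothetical linear model, not the fitted PRIDEM model.
def share_alpha(pa, pb):
    # Hypothetical: Alpha's share falls in its own price, rises in Beta's.
    return min(max(0.5 - 0.04 * (pa - pb), 0.0), 1.0)

def best_response(objective, grid):
    return max(grid, key=objective)

grid = [round(21 + 0.01 * i, 2) for i in range(2001)]  # $21.00 .. $41.00
cost = 19.0
pa, pb = 41.0, 38.0                  # Round 0: starting prices
for _ in range(2):                   # Alpha moves first, then Beta
    pa = best_response(lambda x: (x - cost) * share_alpha(x, pb), grid)
    pb = best_response(lambda x: (x - cost) * (1.0 - share_alpha(pa, x)), grid)
print(round(pa, 2), round(pb, 2))
```

With this symmetric toy model the alternating best responses drift toward a common fixed point, echoing the convergence the text observes after two rounds.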

At the end of two rounds (initiation and response) the prices are $36.19 and $39.83, with shares (contributions) of 0.43 ($7.38) and 0.57 ($11.90) for Alpha and Beta, respectively. Of course, given the ability of Alpha and Beta to collude (if no external competitor were present and if total demand were completely inelastic), they could drive up the prices as much as they liked. (Hence, we do not consider further price-changing rounds for this example.)

The idea of an external competitor can be incorporated into the PRIDEM model by including a (P + 1)-st product with fixed prices and non-price attribute levels, and a starting share. Then, if Alpha and Beta tried to drive up their prices, the external competitor would garner an increasing share of the market.

7. What have we learned?

How did the study's sponsor react to the PRIDEM model? As we have frequently found in applied studies using the model, the sponsor explored the possibilities for non-price attribute changes. In this example, Beta management added a price guarantee and educational aids. These non-price changes were accompanied by a Beta price increase to $41 (at parity with Alpha). As of six months after Beta's changes, Alpha had not retaliated with either non-price or price changes. While we are not privy to the financial consequences of Beta's strategy, to the best of our knowledge market share remained relatively stable over the six months' time period in question.

To date, the PRIDEM model has been used in several empirical applications, most frequently drawn from the pharmaceutical industry. The predictions made from the model have been cross-checked, where possible, with time-series analyses of historical price changes. As is well known, analyses of such 'natural experiments' are fraught with difficulty. However, in the cases analyzed, the results have been roughly concordant with those obtained from the model.

The main advantages of the PRIDEM model over that proposed by Mahajan, Green, and Goldberg (1982) are twofold. First, market segment responses can be estimated by means of main effects parameters and interactions with price and non-price attribute levels. Second, the present model solves for optimal prices, conditioned on fixed levels for price and non-price attributes of competitive products (using variable cost data estimated by the sponsoring firm's financial department). Moreover, use of the model as a decision support system is straightforwardly implemented by non-technical personnel on either a mainframe or a personal computer.

There are several limitations to the model, as currently formulated and operationalized. First, the model deals with aggregated responses, where segment differences are measured by within- and between-set interactions. As Moore (1980) has illustrated, researcher-selected segmenting variables may not adequately capture full individual variation in attribute-level part worths. Second, the model does not make allowance for lack of respondent knowledge about actual prices; in some cases the model could overstate price sensitivity, since full comparative pricing information is shown to the respondents. Third, while the model can handle intra-product line pricing (in terms of within-firm competition), it does not consider joint production/distribution costs.

7.1. Action / reaction sequence

As briefly described earlier, PRIDEM enables the user to examine action/reaction sequences, albeit in a rather simple way that does not consider changes in buyer preferences or other kinds of new information that might be obtained between successive rounds of price changes.

Theoretical work by Hauser and Shugan (1983), Kumar and Sudharshan (1988), and Choi, DeSarbo, and Harker (1990) represents a very interesting topic to pursue in tandem with the measurement aspects of PRIDEM.

To date, management's reaction to PRIDEM's potential for formulating 'dynamic' (action/reaction) strategies has been less than enthusiastic. In our experience managers are much more concerned with the interplay of non-price and pricing strategies, with priority given to the former. Bearing in mind that competitive retaliation to non-price actions is typically more difficult and less immediate, this emphasis is understandable.

When managers do engage in action/reaction gaming, their interest usually does not extend to questions of long-term equilibria but, rather, is focused on only two to three moves ahead. Again, we do not find these views naive or 'myopic'. Managers typically lack information regarding competitors' costs, motivations, and intentions; moreover, they also face questionable assumptions regarding stability in buyers' perceptions and brand preferences over the time period under study.

7.2. Conclusions

These are important caveats and represent opportunities for further research. Hence, we consider the model and its associated decision support system an interim effort that can (and should) be expanded, consistent with making sure that future versions can be operationalized in terms of accessible buyer preferences and cost data. If we have learned anything from the development of PRIDEM, it is that useful models must pay due attention to the kinds of measurements and data inputs one hopes to be able to obtain from the environment (e.g., the marketplace).

What has made PRIDEM work is the simple fact that conjoint data can be obtained in reasonably realistic ways from prospective buyers. Without this measurement linkage (and at the back end, a user-friendly computer system), PRIDEM could have easily joined the ranks of a large array of technically attractive models with few (or no) users.

Appendix

In this section we describe, in further detail, the spline fitting procedure that enabled us to interpolate between the discrete price points utilized in the experimental design.

A.1. Fitting one-dimensional splines

Let g be a function of one variable. Assume that we know the value of g at x_0 < x_1 < ... < x_n (i.e., at n + 1 knots). We want to interpolate to find g(x) smoothly for x_0 ≤ x ≤ x_n. We approximate g(x) by a polynomial of degree p in each interval [x_{i−1}, x_i], i = 1, ..., n. (Note that the meaning of the variables here, e.g., p below, is different from that used in the main text.) That is,

g(x) ≈ \sum_{k=0}^{p} a_{ik} x^k    for x_{i−1} ≤ x ≤ x_i.

Let g^{(k)} denote the k-th derivative of g. We know g(x_{i−1}) and g(x_i); for smoothness we assume that g^{(k)} is continuous at the knots for k = 1, ..., p − 1. This gives us p + 1 linear equations in p + 1 unknowns, and hence a_{i0}, ..., a_{ip} are determined. Specifically, let r_1 = g(x_{i−1}), r_2 = g(x_i), and t_k = g^{(k)}(x_{i−1}), k = 1, ..., p − 1. Writing a_i = (a_{i0}, ..., a_{ip}) and collecting the evaluation and derivative conditions into a linear system, we solve that system for a_i. All we need to specify exogenously is g^{(1)}(x_0), ..., g^{(p−1)}(x_0).

Linear interpolation is a special case of the above. Solving for (a_{i0}, a_{i1}) yields

a_{i1} = (r_2 − r_1) / (x_i − x_{i−1})

and

a_{i0} = r_1 − a_{i1} x_{i−1}.
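The linear special case can be checked numerically; the knot values below are hypothetical.

```python
# Numerical check of the linear special case: on [x0, x1], fit
# g(x) = a0 + a1*x through the knot values r1 = g(x0) and r2 = g(x1).
def linear_piece(x0, x1, r1, r2):
    a1 = (r2 - r1) / (x1 - x0)  # slope
    a0 = r1 - a1 * x0           # intercept
    return a0, a1

a0, a1 = linear_piece(28.0, 34.0, 0.80, 0.60)  # hypothetical knot values
g = lambda x: a0 + a1 * x
print(round(g(28.0), 2), round(g(34.0), 2), round(g(31.0), 2))  # 0.8 0.6 0.7
```

The fitted piece reproduces both knot values exactly and interpolates linearly at the midpoint.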

A.2. Fitting two-dimensional splines

Let g be a function of two variables. Assume that we know the value of g at the lattice of points (x_i, y_j), i = 0, ..., m; j = 0, ..., n. We want to interpolate to find g(x, y) smoothly for all x and y with x_0 ≤ x ≤ x_m and y_0 ≤ y ≤ y_n. We approximate g(x, y) by a polynomial in each rectangle [x_{i−1}, x_i] × [y_{j−1}, y_j]. That is,

g(x, y) ≈ \sum_{k=0}^{p} \sum_{l=0}^{p} a_{ijkl} x^k y^l    for x_{i−1} ≤ x ≤ x_i and y_{j−1} ≤ y ≤ y_j.

In a manner similar to the one-dimensional case, the coefficients are determined by smoothness conditions and by the corner values t_{00} = g(x_{i−1}, y_{j−1}), t_{01} = g(x_{i−1}, y_j), t_{10} = g(x_i, y_{j−1}), and t_{11} = g(x_i, y_j). In particular, let p = 1. Then g(x, y) = a_{ij00} + a_{ij10} x + a_{ij01} y + a_{ij11} xy. We then have four linear equations in four unknowns:

t_{00} = a_{ij00} + a_{ij10} x_{i−1} + a_{ij01} y_{j−1} + a_{ij11} x_{i−1} y_{j−1},

t_{01} = a_{ij00} + a_{ij10} x_{i−1} + a_{ij01} y_j + a_{ij11} x_{i−1} y_j,

t_{10} = a_{ij00} + a_{ij10} x_i + a_{ij01} y_{j−1} + a_{ij11} x_i y_{j−1},

and

t_{11} = a_{ij00} + a_{ij10} x_i + a_{ij01} y_j + a_{ij11} x_i y_j.

These four equations have the solution:

a_{ij11} = (t_{11} − t_{10} − t_{01} + t_{00}) / [(x_i − x_{i−1})(y_j − y_{j−1})],

a_{ij10} = (t_{10} − t_{00}) / (x_i − x_{i−1}) − a_{ij11} y_{j−1},

a_{ij01} = (t_{01} − t_{00}) / (y_j − y_{j−1}) − a_{ij11} x_{i−1},

and

a_{ij00} = t_{00} − a_{ij10} x_{i−1} − a_{ij01} y_{j−1} − a_{ij11} x_{i−1} y_{j−1}.
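The bilinear (p = 1) case can likewise be verified numerically; the corner values below are hypothetical.

```python
# Numerical check of the bilinear (p = 1) case: solve the four corner
# equations for g(x, y) = a00 + a10*x + a01*y + a11*x*y on one rectangle.
def bilinear_piece(x0, x1, y0, y1, t00, t01, t10, t11):
    # t00 = g(x0, y0), t01 = g(x0, y1), t10 = g(x1, y0), t11 = g(x1, y1)
    a11 = (t11 - t10 - t01 + t00) / ((x1 - x0) * (y1 - y0))
    a10 = (t10 - t00) / (x1 - x0) - a11 * y0
    a01 = (t01 - t00) / (y1 - y0) - a11 * x0
    a00 = t00 - a10 * x0 - a01 * y0 - a11 * x0 * y0
    return a00, a10, a01, a11

# Hypothetical corner values on one rectangle of price knots.
a00, a10, a01, a11 = bilinear_piece(28, 34, 34, 38, 0.50, 0.42, 0.61, 0.55)
g = lambda x, y: a00 + a10 * x + a01 * y + a11 * x * y
# The fitted surface reproduces all four corner values.
print(round(g(28, 34), 2), round(g(34, 38), 2))  # 0.5 0.55
```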

References

Brodie, R., and De Kluyver, C.A. (1984), "Attraction versus linear and multiplicative market share models: An empirical evaluation", Journal of Marketing Research 21, 194-201.

Carroll, J.D., Green, P.E., and DeSarbo, W.S. (1979), "Optimizing the allocation of a fixed resource: A simple model and its experimental test", Journal of Marketing 43, 51-57.

Choi, S.C., DeSarbo, W.S., and Harker, P.T. (1990), "Product positioning under price competition", Management Science 36, 175-199.

DeSarbo, W.S., Rao, V.R., Steckel, J.H., Wind, J., and Colombo, R. (1987), "A friction model for describing and forecasting price changes", Marketing Science 6, 299-319.

Doyle, P., and Gidengil, B.Z. (1977), "A review of in-store experiments", Journal of Retailing 53, 47-62.

Green, P.E., and DeSarbo, W.S. (1979), "Componential segmentation in the analysis of consumer tradeoffs", Journal of Marketing 43, 83-91.

Green, P.E., Krieger, A.M., and Zelnio, R.N. (1988), "A componential segmentation model with optimal product design features", Decision Sciences 20, 221-238.

Greville, T.N.E. (ed.) (1969), Theory and Application of Spline Functions, Academic Press, New York.

Hauser, J.R., and Shugan, S.M. (1983), "Defensive marketing strategies", Marketing Science 2, 319-360.

Jones, D.F. (1975), "A survey technique to measure demand under various pricing strategies", Journal of Marketing 39, 75-77.

Kumar, K.R., and Sudharshan, D. (1988), "Defensive marketing strategies: An equilibrium analysis based on decoupled response functions", Management Science 34, 805-815.

Louviere, J., and Woodworth, G. (1983), "Design and analysis of simulated consumer choice or allocation experiments: An approach based on aggregate data", Journal of Marketing Research 20, 350-367.

Mahajan, V., Green, P.E., and Goldberg, S.M. (1982), "A conjoint model for measuring self- and cross-price/demand relationships", Journal of Marketing Research 19, 334-342.

Monroe, K.B., and Della Bitta, A.J. (1978), "Models for pricing decisions", Journal of Marketing Research 15, 413-428.

Moore, W. (1980), "Levels of aggregation in conjoint analysis: An empirical comparison", Journal of Marketing Research 17, 516-523.

Nagle, T. (1984), "Economic foundations for pricing", Journal of Business 57, S3-S26.

Pessemier, E.A. (1960), "An experimental method for estimating demand", Journal of Business 33, 373-383.

Rao, V.R. (1984), "Pricing research in marketing: The state of the art", Journal of Business 57, S39-S60.

Rice, J.R. (1969), The Approximations of Functions, Vol. 2, Addison-Wesley, Reading, MA.

Robinson, B., and Lakhani, C. (1975), "Dynamic price models for new product planning", Management Science 21, 1113-1122.

Silk, A.J., and Urban, G.L. (1978), "Pre-test-market evaluation of new packaged goods", Journal of Marketing Research 15, 171-191.

Theil, H. (1969), "A multinomial extension of the linear logit model", International Economic Review 10, 251-259.

Wittink, D.R. (1977), "Exploring territorial differences in the relationship between marketing variables", Journal of Marketing Research 14, 145-155.

Research Article

Multi-objective airport gate assignment problem in planning and operations

Authors

  • V. Prem Kumar,

    Corresponding author
    1. Transport and Mobility Laboratory (TRANSP-OR), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
    • Correspondence to: Prem Kumar Viswanathan, Transport and Mobility Laboratory (TRANSP-OR), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. E-mail: prem.viswanathan@epfl.ch

    • A major part of the work was performed while the author was affiliated to Symphony Marketing Solutions, Bangalore, India.

  • Michel Bierlaire

    1. Transport and Mobility Laboratory (TRANSP-OR), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), Switzerland

SUMMARY

We consider the assignment of gates to arriving and departing flights at a large hub airport. This problem is highly complex even at the planning stage, when all flight arrivals and departures are assumed to be known precisely in advance. Various considerations are involved in assigning gates to incoming and outgoing flights (such a flight pair for the same aircraft is called a turn) at an airport. Different gates have restrictions, such as adjacency, last-in first-out gates, and towing requirements, which are known from the structure and layout of the airport. The cost components in the objective function of the basic assignment model include a notional penalty for failing to assign a gate to an aircraft, a penalty for the cost of towing an aircraft with a long layover, and a penalty for not assigning preferred gates to certain turns.

One of the major contributions of this paper is to provide a mathematical model for all of these complex constraints as observed at a real airport. Further, we study the problem in both planning and operations modes simultaneously; such an attempt is, perhaps, unique and unprecedented. For the planning mode, we sequentially introduce additional objectives that have not been studied in the gate assignment literature so far: (i) maximization of passenger connection revenues, (ii) minimization of zone usage costs, and (iii) maximization of gate plan robustness. These are included in the model along with the relevant constraints. For the operations mode, the main objectives studied in this paper are recovering the schedule by minimizing schedule variations and maintaining feasibility by minimal retiming in the event of major disruptions. Additionally, the operations-mode models must have very short run times, of the order of a few seconds.

These models are then applied to a functional airline at one of its most congested hubs. Implementation is carried out using the Optimization Programming Language, and computational results for actual data sets are reported. For the planning mode, analysts' perceptions of the weights for the different objectives in the multi-objective model are used wherever an actual dollar value for an objective coefficient is not available. Results are also reported for large but reasonable changes in the objective function coefficients. For the operations mode, flight delays are simulated and the performance of the model is studied. The final results indicate that, with a clever formulation of the conventional continuous-time assignment model, it is possible to solve even large real-life problem instances to optimality within short run times. Copyright © 2013 John Wiley & Sons, Ltd.

1 INTRODUCTION

The airline industry has long been a fertile area for applying optimization techniques. This paper describes the airport gate assignment problem (GAP) as experienced by congested hub airports and large airline companies. Airport gates are restricted resources and are used by incoming and outgoing flights to park the aircraft, disembark the passengers of the incoming flights, and board the passengers of the outgoing flights.

There are some subtle differences in the airport GAP encountered at different airports across the world. For instance, in many European and Asian airports, it is normal, in the absence of an available gate, to assign an aircraft to a remote bay (also referred to as an apron or stand), far away from the airport terminal, with passengers disembarked and boarded using shuttle buses. In the USA, however, remote bays are not allowed, and all passengers are required by law to be disembarked and boarded through an airport gate. This makes the problem highly restricted as well as complex: there have been instances, especially during major disruptions, when aircraft were required to wait on the tarmac for several hours before disembarking their passengers.

There are some other differing features in the operations of airports across the world. In the USA, airport gates are resources owned or leased by a particular airline for a specific period under a medium-term to long-term contract. If an airline falls short of gates, it must either negotiate and sublease gates from a competitor or downsize its operations at that airport. In contrast, airport gates in Asia and Europe are largely managed by the airport authorities. Thus, in the USA, the onus of efficient ground operations lies entirely with the airline concerned.

The period that an aircraft spends on the ground between an incoming (also referred to as arriving) and an outgoing (also referred to as departing) flight is called a turn. In the rest of the paper, we use the term “turn” to refer to a flight combination associated with the same aircraft. Every turn is assigned to a gate, and the same gate is utilized by many aircraft in the course of a day. The airport operations team develops gate assignment plans by using an optimization model that assigns a gate to every turn while balancing operational constraints, given the fleet and turn information through a station. Each hub airport must have a gate plan based on its geography and layout.

Although the different optimization criteria considered in this paper are explained in detail in section 3, we now present some features and restrictions of the problem that have been observed at our study airports. These features and constraints are not airport specific but apply widely to all airports.

  • Adjacency constraints: An adjacency constraint arises when two large aircraft cannot be accommodated in adjacent (nearby) gates because of structural or space limitations. Thus, when gate A is occupied by an aircraft of type 1, the adjacent gate B cannot accept an aircraft of type 2, and vice versa. For example, while gate B11 has a B747 parked at it, gate B12 cannot accommodate a wide-body aircraft such as a B747, B767, B777, or A340. Please refer to Figure 1 for further details. This constraint is observed at almost all major airports in the world and has been widely studied by researchers as well.
  • Last-in first-out (LIFO) gates: LIFO gates arise where two gates lie one behind the other, as shown in Figure 2. If gate #1 is occupied by an aircraft, then the aircraft at gate #2 cannot depart, nor can gate #2 be used to accommodate an incoming aircraft while gate #1 is occupied, even if gate #2 is free. This constraint is neither widely considered nor extensively studied by researchers.
  • Towing: Towing means that an aircraft is towed away from its gate after it arrives and the passengers have disembarked; it is later towed back to a gate for departure. The departure gate may or may not be the same as the arrival gate. The purpose of towing is to free up a gate for other turns, so it is only worthwhile to tow turns with a long duration, that is, a long turn time, say more than 2 hours or so. Further, every time an aircraft is towed out of or into a gate, there is an associated cost, and it is imperative to minimize this cost. A towing operation is illustrated in Figure 3.

Conceptually, a long turn that is towed out and in is broken down into two separate turns. The arriving flight is combined with a dummy outgoing flight after providing adequate time for passenger disembarkation. Similarly, the departing flight is combined with a dummy incoming flight, providing adequate boarding time.
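The splitting of a towed long turn into two sub-turns described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Turn` class, field names, and buffer values are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical minimal representation of a turn; times are minutes from midnight.
@dataclass
class Turn:
    arrival: int
    departure: int

def split_towed_turn(turn, deplane_min=30, board_min=45):
    """Break a long towed turn into two sub-turns.

    The arriving flight is paired with a dummy departure `deplane_min`
    minutes after arrival (time to disembark), and the departing flight is
    paired with a dummy arrival `board_min` minutes before departure (time
    to board). Both buffer values are illustrative assumptions.
    """
    arrival_leg = Turn(turn.arrival, turn.arrival + deplane_min)
    departure_leg = Turn(turn.departure - board_min, turn.departure)
    return arrival_leg, departure_leg

long_turn = Turn(arrival=600, departure=840)   # 10:00 in, 14:00 out
first, second = split_towed_turn(long_turn)
```

The gate between `first.departure` and `second.arrival` is then free for other turns, which is the whole point of towing.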

  • Gate rest: Gate rest is defined as the duration for which the gate is kept idle between a departing flight and the next arriving flight. The purpose of gate rest is to ensure that the gate plan remains fairly robust in the event of minor delays in the flight schedule. A gate rest of 10 minutes ensures that two successive flights are assigned the same gate if and only if the arrival time of the later flight is scheduled at least 10 minutes after the departure time of the former in the planning phase.
  • Preferred gates: Some gates are perceived as favorable for certain turns, and others as unfavorable. Certain pre-determined sets of conveniently located gates are preferred for top business markets and premium service flights. It is also preferred to assign international flights to international gates and domestic flights to domestic gates, even though it is possible to disembark and board passengers otherwise. Similarly, certain gates may be technically able to handle a particular type of aircraft without being a preferred assignment. The assignment of turns to preferred gates is maximized.
  • Unassigned turns: Given that airline activity peaks over small windows in the morning and evening, an airline may not have an adequate number of gates for all the aircraft on the ground. In such situations, either some aircraft are made to wait until a gate is freed, or the airline borrows a gate from a competitor. Neither is preferred: the first affects customer satisfaction, while the second involves a certain cost and is subject to availability.
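The gate rest rule described in the list above is a simple pairwise condition on schedule times. A minimal sketch, assuming minute-valued times (the function name and the example times are illustrative):

```python
def can_share_gate(dep_time_first, arr_time_next, gate_rest=10):
    """True if the later turn arrives at least `gate_rest` minutes after
    the earlier turn departs, so both may be planned on the same gate."""
    return arr_time_next - dep_time_first >= gate_rest

# Departure at 9:00 (540) followed by an arrival at 9:08 (548) violates a
# 10-minute gate rest; an arrival at 9:12 (552) satisfies it.
print(can_share_gate(540, 548))  # False
print(can_share_gate(540, 552))  # True
```

In the planning model this condition decides which pairs of turns are in conflict on a given gate.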

We have explained so far some of the main features and restrictions relevant to an airport GAP. This paper is organized in the following manner. In the next section, viz. section 2, we review some of the relevant literature on this topic. Section 3 describes the different objectives considered for our problem; we also outline the existing gaps in the literature on this subject and describe our contribution in filling them. Section 4 describes the mathematical model we developed to solve this problem, and section 5 illustrates the results of the model on a real-life example. Section 6 concludes the paper and provides some directions for future research on this topic.

2 LITERATURE REVIEW

The airport GAP, as a planning problem, has been extensively studied for three decades. Here, we focus our survey on some of the recent work in this field, without, of course, omitting the pioneering research. Because the airport GAP is a special case of the generalized assignment problem with specific constraints, its complexity is similar. Mathematical modeling for this problem has also generally been inspired by modeling techniques for the assignment problem. One major classification of the research on airport GAPs is along the lines of modeling methodology, viz. continuous time models and discrete time interval models. Dorndorf et al. [13] provide a survey of the state of the art in airport gate assignment research.

One of the earliest papers on gate assignment with the objective of minimizing average passenger walking distance (for both departing and arriving flights) is that of Babic et al. [1], who use a continuous time assignment model. The model assigns aircraft to gates or stands so that larger passenger volumes walk less, while ensuring that all flights are assigned a gate or stand. Mangoubi and Mathaisel [2] also present a continuous time gate assignment model that minimizes average passenger walking distance into and out of the terminal. Their model additionally considers aircraft–gate compatibility and connecting passengers, assigning two connecting flights to gates so that the distance between them stays within a certain limit. However, the model emerges as a quadratic assignment model, which is not linearized very efficiently; they therefore employ Linear Programming (LP) relaxation and greedy heuristics to solve the problem.

Haghani and Chen [3] formulate a multiple time slot version of the GAP with the objective of minimizing passenger walking and baggage transport distance as an integer program. The problem is formulated as a quadratic assignment problem and is solved using an iterative heuristic. Yan and Chang [4] formulate the airport gate assignment as a multi-commodity network flow problem. The objective is to “flow” all the airplanes in each network at minimum cost, which is equivalent to minimizing total passenger walking distance. An algorithm based on Lagrangian relaxation with subgradient methods, accompanied by a shortest path algorithm and a Lagrangian heuristic, was developed to solve the problem. The model was tested using data from a Taiwanese airport.

Xu and Bailey [6] propose a tabu search algorithm for a continuous time airport GAP with the objective of minimizing the walking distances of passengers reaching connecting flights. Given that the problem has a non-linear (quadratic) objective function, a simple tabu search meta-heuristic is used to solve it. The algorithm exploits the special properties of different types of neighborhood moves and creates effective candidate list strategies.

Recent research has concentrated on robust gate assignment plans rather than walking time minimization. Bolat [5] provides a model for robust gate assignment that can be maintained during real-time operations, achieved by maximizing the gap between a departing flight and the next arriving flight. However, the objective function is non-linear, and hence the problem is solved using a heuristic. Yan et al. [8] introduce flexible buffer times to absorb stochastic delays in gate assignment operations. They propose a simulation framework that can not only analyze the effects of stochastic flight delays on static gate assignments but also evaluate flexible buffer times and real-time gate assignment rules. Lim et al. [11] consider the more realistic situation where flight arrival and departure times can change. Although the objective is still to minimize walking distances (or travel time), the model allows the time slots allotted to aircraft at gates to deviate from scheduled slots within a time window. The solution approach uses insert and interval exchange moves together with a time shift algorithm; these neighborhood moves are used within a tabu search framework.

More recent research on this topic focuses on multiple objectives and other special ways of mathematical formulation. There have also been efforts to combine the problem in the planning and operations phases to develop stochastic models. Yan and Huo [7] formulate a dual objective 0–1 integer programming model for aircraft–gate assignment. The first objective minimizes walking times for all passengers, whereas the second minimizes passenger waiting times in the event of an aircraft not finding a free gate. Ding et al. [9] consider the over-constrained GAP, the situation when there are too many flights for the available gates. They propose a 0–1 quadratic program that minimizes both the number of ungated turns and the passenger walking distance. They use a greedy algorithm to minimize ungated flights, while a neighborhood search technique called the interval exchange move, within a tabu search framework, allows flexibility in seeking good solutions.

Lim and Wang [10] attempt to build an accurate evaluation criterion for the ability of an aircraft-to-gate assignment to handle uncertainty in the aircraft schedule, and to search accurately and effectively for the most robust airport gate assignment. They develop a stochastic programming model and transform it into a binary programming model by introducing unsupervised estimation functions, without knowing the real-time arrival and departure times of aircraft in advance. A partition-based search space encoding, two neighborhood operators for single or multiple aircraft reassignment, and a hybrid meta-heuristic combining tabu search and local search are implemented.

Yan and Tang [15] consider the GAP in the planning mode along with the stochastic flight delays that occur in actual operations. They argue that it would be sub-optimal to handle the problem in planning and operations separately, without addressing the inter-relationship between the two stages. They suggest a heuristic approach with three components: a stochastic gate assignment model, a real-time assignment rule, and penalty adjustment methods. Diepen et al. [12] propose a set partitioning formulation modeled on series of flights that are to be assigned to the same gate; such an assignment is called a gate plan. A major advantage of this new formulation is that feasibility can be checked easily during pre-processing, and even the cost calculation of a gate plan is pre-processed. This is also one of the few papers that consider the adjacent gate restriction, as observed at Schiphol airport.

Dorndorf et al. [14] propose two methods to incorporate robustness into gate assignment models, through overlap methods and fuzzy sets. Dorndorf et al. [16] consider the multiple objectives of maximizing the total assignment preference score, minimizing the number of unassigned flights during overload periods, minimizing the number of tows, and maximizing the robustness of the resulting schedule with respect to flight delays. They present a unique approach involving a simple transformation of the flight–gate scheduling problem to a graph problem, namely the clique partitioning problem, which they solve with a heuristic based on the ejection chain algorithm.

Drexl and Nikulin [17] consider the multiple objectives of minimizing the number of ungated flights and the total passenger walking distance or connection time, as well as maximizing the total gate assignment preferences. The problem is formulated as a quadratic assignment problem and solved by Pareto simulated annealing to obtain a representative approximation of the Pareto front. Hu and Di Paolo [18] employ a genetic algorithm to solve the multi-objective airport GAP.

To summarize, airport gate assignment in the planning mode has been extensively researched over the last few decades. Although early approaches considered one objective (usually minimization of passenger walking times) and formulated the problem as an integer or quadratic assignment model with continuous time slots, researchers in the late 1990s started to look beyond the continuous time formulation and proposed discrete time interval and network formulations. In the 21st century, the problem has been formulated from fresh perspectives, such as the set partitioning approach and the clique partitioning graph model, and the focus has shifted to several other objectives commonly observed for this problem. The objectives considered have usually been to

  1. minimize passenger walking times from (or to) the terminal and connecting flight gates,
  2. minimize the number of ungated turns,
  3. minimize the number or costs of towing procedures,
  4. maximize (or minimize) the preference of certain turns to be assigned to favorable (or unfavorable) gates,
  5. maximize the schedule robustness through features such as inclusion of stochastic delays in the model, increased idle time between two turns, or time windows.

Airport gate assignment in the operations mode has been studied as a recovery problem in conjunction with aircraft, crew, and passenger recovery problems. However, operational objectives before recovery, such as minimizing gate plan deviations and maintaining schedule feasibility, have not been studied as specific objectives.

3 PROBLEM FEATURES AND ASSUMPTIONS

The airport GAP studied in this paper is inspired by a real-life case study at Chicago O'Hare, even though not all the restrictions reported in section 1 are applicable to this airport. It is an interesting research challenge to study the different ways of formulating the airport GAP in the planning mode. However, it is surprising that one common thread running through the literature is that the problem has rarely been solved to optimality using exact methods. In fact, we would go so far as to state that heuristic and meta-heuristic techniques have been “over-employed” for this problem. The probable reason is that the scale of the problem is often so large that the mathematical formulations themselves cannot be solved to reasonable or acceptable levels of optimality. Given that the airline for which we solve the GAP is among the top 3 in the world in passenger traffic and Chicago O'Hare is one of its busiest hubs, we want to show that a simple continuous time assignment model with a wide range of multiple objectives can produce high quality solutions in reasonable computing times.

Some of the considerations observed at actual airports are not reported in the literature. Features unique to the GAP studied here include LIFO gates and towing constraints. These constraints have never been explicitly modeled in either continuous time assignment models or discrete time slot assignment models in any of the papers we studied. One reason may be that most solution procedures eventually rely on heuristics and therefore see no need to model these features as additional constraints. Diepen et al. [12] do consider the adjacent gate constraint, but it is pre-processed in their set partitioning formulation. One of our major contributions in this paper is to model all these physical constraints, viz. adjacent gate constraints, LIFO gate constraints, and towing constraints, as logical mathematical ones.

Walking time minimization is one of the earliest objectives considered for the GAP in the literature, and it can easily be modeled as a quadratic programming model. Although some of the available literature attempts to linearize the non-linear model, most other papers solve the problem using heuristics, where the non-linear objective or constraints do not really matter. We believe that walking time minimization is a realistic objective for the airport GAP; however, it is not perceived as a priority from the business point of view. Most airlines would like to know how the walking time criterion actually affects their bottom line, beyond perceived customer satisfaction. In this context, walking time matters most for connecting passengers, especially those with short connection times.

A passenger with a 2-hour connecting time would not mind walking a little farther to her gate (which also gives an opportunity for shopping). However, a passenger with a 35-minute connection would be particularly affected, as she has to disembark, walk to the connecting flight gate, and board within this short time. The airline also realizes that if the passenger misses the connection, it may not only have to make alternative arrangements for the passenger but also fail to realize the complete revenue until the journey is completed. Such connections are identified as “connections at risk,” and it is important for the airline to make suitable arrangements, such as seating these passengers in one of the front rows to enable faster disembarking, assigning the connecting flight to a nearby gate, and so on. In this context, our work differs from the other research in this area, even though the underlying model is conceptually the same. Our first objective is not to minimize average walking distances but to maximize connection realizations through our optimal gate assignment model, which we consider a fresh contribution on this topic.
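A connection is realized only when the passenger's required time budget fits inside the scheduled connection time, which suggests a simple at-risk flag. The following sketch is illustrative only: the function, parameter names, and the 5-minute safety buffer are assumptions, not the airline's actual rule.

```python
def connection_at_risk(connect_min, deplane_min, walk_min, board_cutoff_min,
                       buffer_min=5):
    """Flag a connection as 'at risk' when the required time (disembarking
    plus gate-to-gate walk plus boarding cutoff) plus a small safety buffer
    exceeds the scheduled connection time. All values are in minutes."""
    required = deplane_min + walk_min + board_cutoff_min + buffer_min
    return required > connect_min

# A 35-minute connection with a 12-minute walk is at risk; a 2-hour one is not.
print(connection_at_risk(35, deplane_min=10, walk_min=12, board_cutoff_min=15))   # True
print(connection_at_risk(120, deplane_min=10, walk_min=12, board_cutoff_min=15))  # False
```

A gate assignment that shrinks `walk_min` for such passengers is exactly what the connection revenue objective rewards.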

The second objective we consider is to limit the number of zones used when hub activity is thin. Although it is difficult to reduce the number of zones during the day, it is possible to limit the number of zones at night. This objective has not been considered in the airport gate assignment literature so far, and it is indeed one of the contributions of our paper.

Robust scheduling has been adequately addressed in prior work as one of the objectives of the airport gate assignment model. Robustness, as a measure, can have different definitions. In our paper, we already provide some gate rest between a departing turn and the next arriving turn; this is also referred to as idle time in the literature. The purpose of this gate rest is to absorb small delays in the outgoing or incoming aircraft. Another purpose is to ensure safety by providing a reasonable separation between departing and arriving aircraft, minimizing the possibility of accidents. Usually, a standard gate rest is provided for all turns depending on the aircraft equipment type. We propose to increase the robustness of our planned schedule by increasing the gate rest to account for the past history of delays. The method is simple and intuitive. We note the past delay patterns for every flight, choose the kth percentile of the historical delay in minutes (say, the 95th percentile of the past 300 days of delays) for every turn, and attempt to add it to the gate rest for that turn. Because delays are calculated at the flight level, we take the maximum of the two delays, for the arriving and departing flights of a turn, to calculate the percentile delay. The key phrase here is “attempt to,” because it may not be feasible to provide such a gate rest; for every minute of violation of this additional gate rest, there is an associated penalty. Although idle time maximization is one of the measures used extensively in the literature to increase schedule robustness, we feel that our measure is far more effective because it adds time between turns where it is needed the most, instead of adding time to all turns. We consider this another major contribution of this paper.
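The percentile-based gate rest described above can be sketched in a few lines. This is an illustrative reading of the rule, not the paper's code; the nearest-rank percentile method and the function names are assumptions.

```python
import math

def percentile_delay(delays, k=95):
    """k-th percentile of a historical delay sample in minutes
    (nearest-rank method, one of several common percentile definitions)."""
    ordered = sorted(delays)
    rank = max(1, math.ceil(k / 100 * len(ordered)))
    return ordered[rank - 1]

def robust_gate_rest(arr_delays, dep_delays, base_rest=10, k=95):
    """Target gate rest for a turn: the minimum rest plus the larger of the
    two flights' k-th percentile historical delays, as the text describes."""
    extra = max(percentile_delay(arr_delays, k), percentile_delay(dep_delays, k))
    return base_rest + extra
```

In the model this target is soft: each minute by which the achieved rest falls short of `robust_gate_rest(...)` incurs a penalty.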

It is worth highlighting that the recovery procedures considered for the airport GAP in the literature largely focus on joint schedule recovery for aircraft, crew, and passengers. Although it is best to study recovery models encompassing all functions of the airline business, this is often elaborate and time consuming, and it helps little at the actual time of “fire-fighting” on the ground when a number of aircraft arrive with large delays. Before the schedule is recovered, the arriving and departing aircraft must be provided airport gates subject to the given set of operational constraints. This period between disruption and schedule recovery is handled by ground operations and calls for robust and quick gate assignment models. In the operations mode, the planning objectives that relate to profitability are ignored; the primary focus is on schedule feasibility and ensuring minimal further disruption to the flight schedule. The second important feature of the operations model is run time. It is critical that operations mode models have very small run times, say a few seconds. It is not possible to wait even half an hour for the model to produce an optimal output, because the ground staff literally fight against time while managing disrupted flights.

In this context, it is relevant to note that operations models in gate optimization have two main objectives. The first is to minimize the deviation from the planned schedule. In the event of the delayed arrival and subsequent departure of a large number of turns, assignment to the originally planned gates may no longer be possible. Given the fresh arrival and departure times for the turns, the first objective minimizes the penalty due to reassignment of gates (re-gating). In addition to allowing flight re-gating, the operations model also aims to maintain a similar number of departures and arrivals in every zone for given pre-fixed time intervals; any deviation in the number of aircraft on the ground in a particular zone with respect to the planned schedule is penalized. All the physical and logical constraints of the airport mentioned in section 1 apply in the operations problem as well. Further, the operations model should be capable of handling constraints specifying (i) turns that must not be re-gated and/or (ii) turns that may be re-gated.
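The re-gating penalty and the frozen-turn restriction just described can be illustrated with a small evaluation function. This is a hypothetical sketch, not the operations model itself; the data layout, names, and penalty value are assumptions.

```python
def regating_penalty(planned, revised, penalty_per_move=1.0, frozen=()):
    """Total penalty for turns whose revised gate differs from the planned
    one. Turns listed in `frozen` must keep their planned gate; a revision
    that moves a frozen turn is rejected as infeasible."""
    total = 0.0
    for turn, gate in revised.items():
        if gate != planned[turn]:
            if turn in frozen:
                raise ValueError(f"turn {turn} may not be re-gated")
            total += penalty_per_move
    return total

plan = {"T1": "B11", "T2": "B12", "T3": "C4"}
print(regating_penalty(plan, {"T1": "B11", "T2": "C4", "T3": "C5"}))  # 2.0
```

An optimization model would minimize this quantity over feasible revised assignments rather than merely score a given one, but the cost being minimized is the same.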

While running the gate assignment model in the operations mode with the previously mentioned objective, a scenario may well emerge in which the number of aircraft on the ground exceeds the available gates; this could result from several delayed arrivals or departures piling up around the hub peak time. Thus, the second objective in the operations mode is to maintain schedule feasibility by retiming some of the flights to a later time. Care is taken to limit the extent of retiming for any individual turn, while ensuring that such retimings are minimal and heavily penalized.

So far, we have described the different objectives and constraints considered in our version of the airport GAP and highlighted the distinctive features of our model, which may be fresh contributions on this topic. However, there are also some inherent assumptions and limitations in our model, described as follows:

  • Connection revenue is realized only if the passenger is able to disembark, walk between the gates, and board before the departure of the connecting flight. This is a fair assumption that closely reflects reality. However, it does not consider any additional costs that would be borne if the passenger misses the connection.
  • Disembarking time, walking time between different gates at the airport, and boarding time are provided as point estimate inputs. This is a strong assumption because different passengers may actually take different amounts of time for the same sequence of activities; this is especially true for passengers in wheelchairs and passengers traveling with families. However, the ensuing model would become extremely complicated if pedestrian behavior were included in it.
  • Connection revenue is provided as point estimate inputs. This is again a strong assumption, as connection revenue may change from day to day. The estimates used in the model are based on average values over a fairly long period, because it would not be possible to change gates for turns on a daily basis.
  • For schedule robustness, gate rest accounts for the minimum gate rest and a certain percentile delay of the flights in the turn. Although this increases schedule robustness to some extent, it is certainly not the best way. It would be ideal to model the stochastic flight delays in the planning model, but the resulting stochastic mixed integer program would be too complex to handle.
  • For the zone minimization objective, it is conveniently assumed that the non-productive travel time for employees within a zone is the same for all gates in the zone and is necessarily less than the travel time to cross zones. This is a reasonable assumption given the actual layout of the airport, where the zones are fairly spread out and inter-zonal distances are usually much greater than intra-zonal distances.
  • Some papers in the literature allow for aircraft waiting in times of congestion in the planning problem. However, this has not been taken into account in our problem, as per the wishes of the airline. This is fairly reasonable because the gate assignment, at least during the planning stage, should not plan for aircraft waiting. Waiting might eventually happen during real-time operations (when certain turns are retimed beyond their actual arrival), but it would indeed be a bad plan to allow aircraft waiting in the absence of gates. If a gate plan is highly infeasible, the airport operations team works with the flight scheduling team to move some flights to non-peak periods while negotiating for more gates with the airport authority and competitors.

We now describe the mathematical model, a 0–1 mixed integer program that produces a feasible gate plan in light of all the preceding business constraints.

4 MATHEMATICAL MODEL

In this paper, we first consider gate assignment in the planning mode, where cost minimization and revenue maximization are the major optimization criteria, as opposed to feasibility of solutions or walking times for passengers. In the planning mode, the flight schedule and gate plan are used to arrive at a gate assignment schedule that satisfies the business constraints while optimizing the objective function. We develop a basic 0–1 integer programming formulation with a linear objective function and constraints that assigns one gate to every flight and ensures that all business constraints are satisfied.

We focus on cost minimization as the objective of this model. The different cost components relate to the cost of providing unfavorable gates to preferred flights, the cost of towing, and the cost of an ungated turn. In this model, we also describe all business constraints associated with the problem. The additional objectives of connection revenue maximization, minimization of zone usage cost, and gate plan robustness are described later. Although most of the parameters and decision variables are introduced with the first objective, specific parameters and decision variables corresponding to the other objectives will be described later.
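As a toy illustration of this cost structure (omitting towing for brevity), the following brute-force enumeration assigns each turn one gate or leaves it ungated at cost C2, subject to the rule that overlapping turns cannot share a gate. It is a minimal sketch with made-up data, not the paper's OPL implementation, and enumeration is only viable at this tiny scale.

```python
from itertools import product

turns = {"T1": (600, 700), "T2": (630, 720), "T3": (710, 800)}  # (arrival, departure) minutes
gates = ["G1", "G2"]
C2 = 100                                          # cost of an ungated turn
cost = {(i, k): 1 for i in turns for k in gates}  # notional gate preference costs

def overlaps(i, j):
    """True if the on-ground intervals of turns i and j overlap."""
    (a1, b1), (a2, b2) = turns[i], turns[j]
    return a1 < b2 and a2 < b1

def feasible(assign):
    """No two overlapping turns may occupy the same gate (None = ungated)."""
    items = [(i, k) for i, k in assign.items() if k is not None]
    return all(not (k == l and overlaps(i, j))
               for (i, k) in items for (j, l) in items if i < j)

def total_cost(assign):
    """Gate preference costs plus C2 per ungated turn."""
    return sum(C2 if k is None else cost[i, k] for i, k in assign.items())

# Enumerate every gate choice (including 'ungated') and keep the cheapest feasible one.
best = min((dict(zip(turns, choice))
            for choice in product(gates + [None], repeat=len(turns))
            if feasible(dict(zip(turns, choice)))),
           key=total_cost)
print(total_cost(best))  # 3 with these illustrative data
```

The integer program in this section expresses exactly these choices with binary variables xik and yi instead of enumeration, which is what makes the real problem tractable.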

The following are the data sets for the turn schedule and the airport.

Sets:
TURNS: set of turns to be gated, represented as i or j
LTURNS: set of long turns for which towing is allowed, represented as t; LTURNS ⊂ TURNS
GATES: set of gates, represented as k or l
ADJACENT: set of adjacent gate pairs (k,l) that have the adjacent gate restriction
LIFO: set of last-in first-out gate pairs, represented as (kF,lR) to distinguish front and rear gates
(i1,i2): the two new turns arising out of a towed long turn t
Ek: set of equipment (aircraft) types that gate k can handle, k ∈ GATES

Ēk, Ēl: sets of equipment (aircraft) types such that, while an aircraft of a type in Ēk occupies gate k, no aircraft of a type in Ēl may use gate l, where (k,l) ∈ ADJACENT, and vice versa

Parameters:
α: minimum gate rest time
C1: the actual cost of towing an aircraft
C2: the cost of not assigning a gate to a turn, determined by the cost of borrowing a gate from a competing airline
ai: planned arrival time of turn i ∈ TURNS
bi: planned departure time of turn i ∈ TURNS
Cik: notional cost of assigning turn i to gate k
ei: equipment (aircraft) type used by turn i ∈ TURNS

Decision Variables:
xik ∈ {0,1}: 1 if turn i is assigned to gate k; 0 otherwise
yi ∈ {0,1}: 1 if turn i is not assigned to any gate; 0 otherwise
wt ∈ {0,1}: 1 if long turn t is towed; 0 otherwise

  • Objective Function: We start by minimizing all the cost elements in our model: the cost of assigning unfavorable gates to certain preferred, premium-service turns; the cost of towing a turn; and the cost of not assigning a turn to a gate (an ungated turn). Note that the gate assignment costs are notional, based on the perceived importance of certain flights, whereas the cost of towing and the cost of an ungated turn are actual costs. Cik represents how unfavorable the assignment of turn i to gate k would be. This coefficient, usually positive, is affected by a number of business and operational preferences:
    • Some predefined sets of conveniently located gates are preferred for turns that contain the top business market flights and premium services flights.
    • Some international terminal gates are capable of accommodating domestic arrivals, but they are less preferred than gates at domestic terminals.
    • Some gates are less preferred for some fleet types because of gate features.
    Minimize  Σi∈TURNS Σk∈GATES Cik xik + C1 Σt∈LTURNS wt + C2 Σi∈TURNS yi    (1a)
  • Constraints: A turn has to be assigned to exactly one gate or to none at all. It is also required that the assigned gate is capable of handling the aircraft type associated with the turn. This is modeled as
    Σk∈GATES: ei∈Ek xik + yi = 1,  ∀ i ∈ TURNS    (2)
    It is possible that the airline wants a particular turn i′ to be assigned only to a certain gate k. This can be modeled as
    xi′k = 1    (3)
    It is possible that the airline does not want a particular turn j′ to be assigned to a certain gate l. This can be modeled as
    xj′l = 0    (4)
    There cannot be two turns on the same gate at the same time, including the gate rest time after the turn has departed. This is referred to as overlap and can be modeled as
    xik + xjk ≤ 1,  ∀ k ∈ GATES and all pairs i, j ∈ TURNS with aj < bi + α and ai < bj + α    (5)
    Two adjacent gates cannot simultaneously handle certain combinations of aircraft types. This can be modeled as
    xik + xjl ≤ 1,  ∀ (k,l) ∈ ADJACENT and all overlapping pairs i, j ∈ TURNS with ei ∈ Ēk and ej ∈ Ēl    (6)
    The following two constraints ensure that no aircraft can exit the front gate as long as there is an aircraft in the rear gate and that no aircraft can enter the front gate as long as the aircraft in the rear gate has not departed.
    xi,kF + xj,lR ≤ 1,  ∀ (kF,lR) ∈ LIFO and all pairs i, j ∈ TURNS with aj < bi < bj    (7)
    xi,kF + xj,lR ≤ 1,  ∀ (kF,lR) ∈ LIFO and all pairs i, j ∈ TURNS with aj < ai < bj    (8)
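The overlap test behind constraint (5) is easy to state in code. The helper below is illustrative (the function name is assumed, not from the paper), using the paper's ai, bi, and α notation: a turn occupies its gate from its arrival until its departure plus the minimum gate rest, and two turns conflict on a gate exactly when these padded windows intersect.

```python
# Illustrative helper for the overlap test behind constraint (5).
# Turn i occupies its gate over [a_i, b_i + alpha), where alpha is the
# minimum gate rest; two turns conflict on the same gate exactly when
# these padded occupancy windows intersect.
def turns_overlap(a_i, b_i, a_j, b_j, alpha):
    """True if two turns cannot share a gate given minimum gate rest alpha."""
    return a_j < b_i + alpha and a_i < b_j + alpha

# With 30 minutes of gate rest, a turn departing at 600 still blocks a turn
# arriving at 620 on the same gate, but not one arriving at 640.
assert turns_overlap(500, 600, 620, 700, 30)
assert not turns_overlap(500, 600, 640, 700, 30)
```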

We now introduce the following constraints to represent the towing of a long turn. Note that long turn t is broken down into two possible half turns i1 and i2, such that the arrival time of i1 is the same as the arrival time of t and the departure time of i2 is the same as the departure time of t. The first constraint ensures that the long turn is split only if the towing option is chosen, by paying the towing cost, and that the two half turns are assigned to different gates. The next constraint ensures that the two half turns arising from breaking a long turn do not overlap. The last constraint ensures that the half turns do not violate the adjacent gate restrictions.

(11)
(12)
(13)

This model, with objective (1a) and constraints (2)–(13), minimizes the overall operational cost while satisfying all business-related constraints. The formulation handles all the complex constraints of gate assignment, although the objective function is kept simple. Although the model can be enriched with additional objectives, the constraint sets (2)–(13) are always needed to produce feasible solutions. As a result, we refer to this formulation as the base model in the rest of the paper.
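To make the cost trade-off in the base model concrete, the following is a minimal brute-force sketch of objective (1a) and constraints (2) and (5) on invented toy data. It enumerates assignments rather than calling a MIP solver, and every name and value in it is an illustrative assumption, not the paper's implementation: three turns, two gates, a notional preference of turn T1 for gate G1, and a large ungated-turn cost C2.

```python
from itertools import product

# Toy data (illustrative assumptions, not from the paper)
TURNS = ["T1", "T2", "T3"]
GATES = ["G1", "G2"]
a = {"T1": 0, "T2": 10, "T3": 120}    # planned arrival times ai (minutes)
b = {"T1": 60, "T2": 90, "T3": 200}   # planned departure times bi
alpha = 30                            # minimum gate rest
C = {(i, k): 1 for i in TURNS for k in GATES}  # notional costs Cik
C[("T1", "G1")] = 0                   # T1 prefers gate G1
C2 = 100                              # cost of leaving a turn ungated

def feasible(assign):
    # Constraint (5): two turns whose occupancy windows, padded by the gate
    # rest alpha, intersect may not occupy the same gate.
    for i in TURNS:
        for j in TURNS:
            if i < j and assign[i] is not None and assign[i] == assign[j] \
               and a[j] < b[i] + alpha and a[i] < b[j] + alpha:
                return False
    return True

def cost(assign):
    # Objective (1a) without the towing term: gate costs plus C2 per ungated turn.
    return sum(C2 if assign[i] is None else C[(i, assign[i])] for i in TURNS)

# Constraint (2): every turn takes exactly one gate, or None (ungated, yi = 1).
candidates = (dict(zip(TURNS, choice))
              for choice in product(GATES + [None], repeat=len(TURNS)))
best = min((c for c in candidates if feasible(c)), key=cost)
print(best, cost(best))
```

Since T1 and T2 overlap, the minimum-cost solution puts T1 on its preferred gate G1, forces T2 onto G2, and gates T3 anywhere, for a total notional cost of 2; a real instance would use a MIP solver rather than enumeration.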

4.1 Planning mode

The previous model determines a least-cost feasible gate assignment that satisfies all the logical and business constraints. However, such a feasible assignment is not sufficient in real life, where decision-makers often face several additional objectives, including maximization of passenger connection revenue, minimization of zone usage cost, and maximization of schedule robustness. We now present the different objectives that we consider in the planning mode of the airport GAP, starting with the objective of revenue maximization.

4.1.1 Maximization of passenger connection revenue

In this section, we extend the base model to maximize passenger revenue by optimizing the connection time for “connections at risk.”

Although there are stipulated minimum and maximum connection times, flights gated at distant gates can result in misconnections when the connection time is tight. For a passenger who transfers to a connecting flight at the airport, the binding component of the connection time is the walking time required from the arrival gate of her incoming flight to the departure gate of her outgoing flight. For the planning model, the arrival time of the incoming flight and the departure time of the outgoing flight are assumed to be fixed as per the published flight schedule. Each flight must be assigned to exactly one gate, and there must be sufficient time for passengers to board at the gate.

When building a 0–1 integer programming formulation, one of the key issues is the choice of decision variable. We roll the gating of an incoming connecting flight and an outgoing connecting flight into one variable. Thus, for a flight schedule with 800 flights and 100 gates, the worst case could require 6.4 billion 0–1 variables. Fortunately, not every flight presents a connection opportunity with every other flight. In practice, a flight can potentially connect to only about 20 other flights, and, in most cases, the connection time comfortably exceeds the longest walking time between any two gates at the airport, which means that there are few “connections at risk.”
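The variable counts above follow from simple arithmetic; the snippet below verifies them, taking the roughly 20 connections per flight quoted in the text as the pruning estimate.

```python
# Arithmetic behind the variable counts quoted above. A pairing variable
# z_ijkl indexes (incoming turn, outgoing turn, arrival gate, departure gate).
flights, gates = 800, 100
worst_case = flights * flights * gates * gates
print(worst_case)  # 6400000000, the "6.4bn" worst case

# If each flight connects to only ~20 others (the estimate in the text),
# the flight-pair index shrinks from 800*800 pairs to 800*20 pairs.
pruned = flights * 20 * gates * gates
print(pruned)      # 160000000, a 40x reduction before "connections at risk"
                   # filtering removes still more variables
```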

We now present an extension to the model given previously (1a)–(13) to incorporate this objective in the mathematical formulation. As before, we will first define the additional sets and parameters and then the additional decision variables, objective function, and constraints.

Sets:
CNX: set of all revenue connections (i,j) involving turns i and j

Parameters:
REVENUEij: revenue generated by connecting turn i to turn j, where (i,j) ∈ CNX
Walk(k,l): total walking time, including boarding, deplaning, and other components of time, to move from gate k to gate l

Decision Variables:
zijkl ∈ {0,1}: 1 if turn i is assigned to gate k, turn j is assigned to gate l, and (i,j) ∈ CNX; 0 otherwise

  • Mathematical Model:
    • Maximize
      Σ(i,j)∈CNX Σk∈GATES Σl∈GATES REVENUEij zijkl    (1b)
      zijkl ≤ xik,  zijkl ≤ xjl,  zijkl ≥ xik + xjl − 1,  ∀ (i,j) ∈ CNX; k, l ∈ GATES    (15)
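Since zijkl is meant to equal 1 exactly when xik = 1 and xjl = 1, the standard way to express this product with linear constraints is zijkl ≤ xik, zijkl ≤ xjl, and zijkl ≥ xik + xjl − 1; the paper's exact constraint set is not reproduced here, so treat this linearization as an assumption. A quick truth-table check confirms that the three inequalities force z = x AND y for binary variables:

```python
# Truth-table check of the standard product linearization for binary variables:
# z <= x, z <= y, z >= x + y - 1 together force z = x AND y.
for x in (0, 1):
    for y in (0, 1):
        feasible_z = [z for z in (0, 1)
                      if z <= x and z <= y and z >= x + y - 1]
        assert feasible_z == [x & y], (x, y, feasible_z)
print("linearization forces z = x AND y for all binary x, y")
```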
