Discrete Random Variable

A discrete random variable is one that can assume only a finite, or countably infinite, number of distinct values.

From: The Joy of Finite Mathematics, 2016

Basic Concepts in Probability

Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013

1.2.3 Continuous Random Variables

Discrete random variables take a set of possible values that are either finite or countably infinite. However, there exists another group of random variables that can assume an uncountable set of possible values. Such random variables are called continuous random variables. Thus, we define a random variable X to be a continuous random variable if there exists a nonnegative function f_X(x), defined for all real x ∈ (−∞, ∞), having the property that for any set A of real numbers,

$$P[X \in A] = \int_A f_X(x)\, dx$$

The function f_X(x) is called the probability density function (PDF) of the random variable X and is defined by

$$f_X(x) = \frac{dF_X(x)}{dx}$$

This means that

$$F_X(x) = \int_{-\infty}^{x} f_X(u)\, du$$
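As a quick numerical illustration (ours, not from the chapter), the following Python sketch checks both relationships for an assumed exponential density f_X(x) = e^(−x), x ≥ 0: the probability of a set is the integral of the PDF, and the PDF is the derivative of the CDF.

```python
# Minimal numerical check (ours) of the two relationships above, using an
# assumed exponential density f_X(x) = e^(-x) for x >= 0.
from math import exp

f = lambda x: exp(-x) if x >= 0 else 0.0          # PDF
F = lambda x: 1 - exp(-x) if x >= 0 else 0.0      # CDF, the integral of f

# P[X in A] for A = [1, 2], via midpoint-rule integration of the PDF:
a, b, n = 1.0, 2.0, 100_000
h = (b - a) / n
p = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
print(p, F(b) - F(a))                             # both ~0.23254

# f as the derivative of F, via a central difference at x = 1:
x, d = 1.0, 1e-6
print((F(x + d) - F(x - d)) / (2 * d), f(x))      # both ~0.36788
```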

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124077959000013

Discrete Lifetime Models

N. Unnikrishnan Nair, ... N. Balakrishnan, in Reliability Modelling and Analysis in Discrete Time, 2018

Geeta Distribution

A discrete random variable X is said to have a Geeta distribution with parameters θ and α if

$$f(x) = \frac{1}{\alpha x - 1} \binom{\alpha x - 1}{x} \theta^{x-1} (1-\theta)^{\alpha x - x}, \quad x = 1, 2, \ldots; \quad 0 < \theta < 1,\ 1 < \alpha < \theta^{-1}.$$

The distribution is L-shaped and unimodal with

$$\mu = (1-\theta)(1-\alpha\theta)^{-1}$$

and

$$\mu_2 = (\alpha - 1)\,\theta\,(1-\theta)(1-\alpha\theta)^{-3}.$$

A recurrence relation satisfied by the central moments is of the form

$$\mu_{r+1} = \theta\mu\,\frac{d\mu_r}{d\theta} + r\,\mu_2\,\mu_{r-1}, \quad r = 1, 2, \ldots.$$

Estimation of the parameters can be done by the method of moments or by the maximum likelihood method. The moment estimates are

$$\hat{\mu} = \bar{x} \quad \text{and} \quad \hat{\alpha} = \frac{s^2 - \bar{x}(\bar{x}-1)}{s^2 - \bar{x}^2(\bar{x}-1)}$$

with x̄ and s² being the sample mean and variance. The maximum likelihood estimates are

$$\hat{\mu}_M = \bar{x}$$

and α̂ is iteratively obtained by solving the equation

$$\frac{(\alpha - 1)\bar{x}}{\alpha\bar{x} - 1} = \exp\left[-\frac{1}{n\bar{x}} \sum_{x=2}^{k} \sum_{i=2}^{x} \frac{x f_x}{\alpha x - i}\right]$$

with $n = \sum_{x=1}^{k} f_x$ as the sample size and f_x the frequency of x = 1, 2, …, k. For details, see Consul (1990). As the Geeta model is a member of the MPSD, the reliability properties can be readily obtained from those of the MPSD discussed in Section 3.2.2.
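Since the PMF, moments, and moment estimators above are all explicit, they are easy to check numerically. The Python sketch below is our own illustration, with θ = 0.2 and α = 3 chosen arbitrarily; it evaluates the PMF through log-gamma functions, confirms the stated mean and variance, and applies the moment estimator for α.

```python
# Exploratory check (ours) of the Geeta PMF, its mean/variance, and the
# moment estimate of alpha; theta = 0.2 and alpha = 3 are arbitrary choices.
from math import lgamma, log, exp

def geeta_pmf(x, theta, alpha):
    # f(x) = [1/(ax-1)] C(ax-1, x) theta^(x-1) (1-theta)^(ax-x), via logs
    n = alpha * x - 1
    log_c = lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
    return exp(log_c - log(n) + (x - 1) * log(theta)
               + (alpha * x - x) * log(1 - theta))

theta, alpha = 0.2, 3.0
mean = sum(x * geeta_pmf(x, theta, alpha) for x in range(1, 500))
var = sum((x - mean) ** 2 * geeta_pmf(x, theta, alpha) for x in range(1, 500))
print(mean, (1 - theta) / (1 - alpha * theta))                        # ~2.0
print(var, (alpha - 1) * theta * (1 - theta) / (1 - alpha * theta) ** 3)  # ~5.0

def alpha_moment_estimate(xbar, s2):
    # alpha-hat from the moment equations quoted above
    return (s2 - xbar * (xbar - 1)) / (s2 - xbar ** 2 * (xbar - 1))

print(alpha_moment_estimate(2.0, 5.0))    # recovers alpha = 3 at the true moments
```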

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128019139000038

Probability Distributions of Interest

N. Balakrishnan, ... M.S. Nikulin, in Chi-Squared Goodness of Fit Tests with Applications, 2013

8.1.3 Poisson distribution

The discrete random variable X follows the Poisson distribution with parameter λ > 0 if

$$P\{X = k\} = \frac{\lambda^k}{k!} e^{-\lambda}, \quad k = 0, 1, \ldots,$$

and we shall denote it by X ∼ P(λ). It is easy to show that

$$E X = \operatorname{Var} X = \lambda,$$

so that

$$\frac{\operatorname{Var} X}{E X} = 1.$$

The distribution function of X is

(8.4) $$P\{X \leq m\} = \sum_{k=0}^{m} \frac{\lambda^k}{k!} e^{-\lambda} = 1 - I_{\lambda}(m+1),$$

where

$$I_x(f) = \frac{1}{\Gamma(f)} \int_0^x t^{f-1} e^{-t}\, dt, \quad x > 0$$

is the incomplete gamma function. Often, for large values of λ, to compute (8.4) we can use the normal approximation

$$P\{X \leq m\} = \Phi\left(\frac{m + 0.5 - \lambda}{\sqrt{\lambda}}\right) + O\left(\frac{1}{\sqrt{\lambda}}\right), \quad \lambda \to \infty.$$
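Both (8.4) and the normal approximation are easy to verify numerically. The sketch below is ours, with SciPy as an assumed dependency and λ = 25, m = 30 as arbitrary values.

```python
# Numerical check (ours) of Eq. (8.4) and the normal approximation; SciPy is
# an assumed dependency, and lambda = 25, m = 30 are arbitrary choices.
from scipy.stats import poisson, norm
from scipy.special import gammainc   # regularized lower incomplete gamma

lam, m = 25.0, 30
print(poisson.cdf(m, lam))                       # P{X <= m} directly
print(1 - gammainc(m + 1, lam))                  # 1 - I_lambda(m + 1), Eq. (8.4)
print(norm.cdf((m + 0.5 - lam) / lam ** 0.5))    # normal approximation, close
```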

Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent and identically distributed random variables following the same Bernoulli distribution with parameter p, 0 < p < 1, with

$$P\{X_i = 1\} = p, \quad P\{X_i = 0\} = q = 1 - p.$$

Let

$$\mu_n = X_1 + \cdots + X_n, \qquad F_n(x) = P\left\{\frac{\mu_n - np}{\sqrt{npq}} \leq x\right\}, \quad x \in \mathbf{R}^1.$$

Then, uniformly for $x \in \mathbf{R}^1$, we have

$$F_n(x) \to \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt, \quad n \to \infty.$$

From this result, it follows that for large values of n,

$$P\left\{\frac{\mu_n - np}{\sqrt{npq}} \leq x\right\} \approx \Phi(x).$$

Often this approximation is used with the so-called continuity correction given by

$$P\left\{\frac{\mu_n - np + 0.5}{\sqrt{npq}} \leq x\right\} \approx \Phi(x).$$

We shall now describe the Poisson approximation to the binomial distribution. Let {μ_n} be a sequence of binomial random variables, μ_n ∼ B(n, p_n), 0 < p_n < 1, such that

$$np_n \to \lambda \quad \text{as } n \to \infty, \qquad \lambda > 0.$$

Then,

$$\lim_{n\to\infty} P\{\mu_n = m \mid n, p_n\} = \frac{\lambda^m}{m!} e^{-\lambda}.$$

In practice, this means that for "large" values of n and "small" values of p, we may approximate the binomial distribution B(n, p) by the Poisson distribution with parameter λ = np, that is,

$$P\{\mu_n = m \mid n, p\} \approx \frac{\lambda^m}{m!} e^{-\lambda}.$$

It is of interest to note that (Hodges and Le Cam, 1960)

$$\sup_x \left|\sum_{m=0}^{x} \binom{n}{m} p^m (1-p)^{n-m} - \sum_{m=0}^{x} \frac{\lambda^m}{m!} e^{-\lambda}\right| \leq \frac{C}{n}, \quad \text{where } C \leq 3\lambda.$$

Hence, if the probability of success in Bernoulli trials is small, and the number of trials is large, then the number of observed successes in the trials can be regarded as a random variable following the Poisson distribution.
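A short computation (ours) illustrates both the Poisson approximation and the bound just stated, for assumed values n = 500 and p = 0.01:

```python
# Sketch (ours) comparing the binomial and Poisson CDFs and checking the
# sup-distance against the bound C/n with C <= 3*lambda, as stated above.
from math import comb, exp

n, p = 500, 0.01
lam = n * p
binom_cdf = pois_cdf = worst = 0.0
pois_term = exp(-lam)                    # lambda^0 / 0! * e^(-lambda)
for m in range(n + 1):
    binom_cdf += comb(n, m) * p ** m * (1 - p) ** (n - m)
    pois_cdf += pois_term
    worst = max(worst, abs(binom_cdf - pois_cdf))
    pois_term *= lam / (m + 1)           # next Poisson term
print(worst, 3 * lam / n)                # observed sup distance vs. the bound
```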

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123971944000089

Multiple Random Variables

Oliver C. Ibe, in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014

Section 5.7 Covariance and Correlation Coefficient

5.20

Two discrete random variables X and Y have the joint PMF given by

$$p_{XY}(x,y) = \begin{cases} 0 & x = -1,\ y = 0 \\ 1/3 & x = -1,\ y = 1 \\ 1/3 & x = 0,\ y = 0 \\ 0 & x = 0,\ y = 1 \\ 0 & x = 1,\ y = 0 \\ 1/3 & x = 1,\ y = 1 \end{cases}$$

a.

Are X and Y independent?

b.

What is the covariance of X and Y?

5.21

Two events A and B are such that P(A) = 1/4, P(B|A) = 1/2, and P(A|B) = 1/4. Let the random variable X be defined such that X = 1 if event A occurs and X = 0 if event A does not occur. Similarly, let the random variable Y be defined such that Y = 1 if event B occurs and Y = 0 if event B does not occur.

a.

Find E[X] and the variance of X.

b.

Find E[Y] and the variance of Y.

c.

Find ρ_XY and determine whether or not X and Y are uncorrelated.

5.22

A fair die is tossed three times. Let X be the random variable that denotes the number of 1s and let Y be the random variable that denotes the number of 3s. Find the correlation coefficient of X and Y.
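Exercises like 5.22 are easy to sanity-check by simulation. The Monte Carlo sketch below is our own illustration; for multinomial counts with cell probabilities 1/6 each, the exact correlation is −(1/6)/(5/6) = −1/5, which the estimate should approach.

```python
# Monte Carlo sketch (ours) for Problem 5.22: estimate the correlation of the
# number of 1s (X) and the number of 3s (Y) in three tosses of a fair die.
import random

random.seed(1)
trials = 200_000
sx = sy = sxx = syy = sxy = 0.0
for _ in range(trials):
    tosses = [random.randint(1, 6) for _ in range(3)]
    x, y = tosses.count(1), tosses.count(3)
    sx += x; sy += y; sxx += x * x; syy += y * y; sxy += x * y

cov = sxy / trials - (sx / trials) * (sy / trials)
vx = sxx / trials - (sx / trials) ** 2
vy = syy / trials - (sy / trials) ** 2
print(cov / (vx * vy) ** 0.5)    # ~ -0.2, the multinomial value -1/5
```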

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008522000055

Mathematical foundations

Xin-She Yang, in Introduction to Algorithms for Data Mining and Machine Learning, 2019

2.4.1 Random variables

For a discrete random variable X with distinct values such as the number of cars passing through a junction, each value x_i may occur with a certain probability p(x_i). In other words, the probability varies and is associated with the corresponding random variable. Traditionally, an uppercase letter such as X is used to denote a random variable, whereas a lowercase letter such as x_i represents its values. For example, if X means a coin-flipping outcome, then x_i = 0 (tail) or 1 (head). A probability function p(x_i) is a function that assigns probabilities to all the discrete values x_i of the random variable X.

Since an outcome must occur within the sample space, all the probabilities must sum to one, which leads to

(2.33) $$\sum_{i=1}^{n} p(x_i) = 1.$$

For example, the outcomes of tossing a fair coin form a sample space. The outcome of a head (H) is an event with probability P(H) = 1/2, and the outcome of a tail (T) is also an event with probability P(T) = 1/2. The sum of both probabilities should be 1, that is,

(2.34) $$P(H) + P(T) = \frac{1}{2} + \frac{1}{2} = 1.$$

The cumulative probability function of X is defined by

(2.35) $$P(X \leq x) = \sum_{x_i \leq x} p(x_i).$$

Two main measures for a random variable X with a given probability distribution p(x) are its mean and variance. The mean μ or expectation E[X] is defined by

(2.36) $$\mu \equiv E[X] \equiv \langle X \rangle = \int x\, p(x)\, dx$$

for a continuous distribution, where the integral is taken within the integration limits. If the random variable is discrete, then the integration becomes the weighted sum

(2.37) $$E[X] = \sum_i x_i\, p(x_i).$$

The variance var[X] = σ² is the expectation of the squared deviation, that is, E[(X − μ)²]. We have

(2.38) $$\sigma^2 \equiv \mathrm{var}[X] = E[(X-\mu)^2] = \int (x-\mu)^2\, p(x)\, dx.$$

The square root of the variance, $\sigma = \sqrt{\mathrm{var}[X]}$, is called the standard deviation, which is simply σ.

The above definition of the mean μ = E[X] is essentially the first moment if we define the kth moment of a random variable X (with a probability density distribution p(x)) by

(2.39) $$\mu_k \equiv E[X^k] = \int x^k\, p(x)\, dx \quad (k = 1, 2, 3, \ldots).$$

Similarly, we can define the kth central moment by

(2.40) $$\nu_k \equiv E[(X - E[X])^k] \equiv E[(X-\mu)^k] = \int (x-\mu)^k\, p(x)\, dx \quad (k = 0, 1, 2, 3, \ldots),$$

where μ is the mean (the first moment). Thus, the zeroth central moment is the sum of all probabilities when k = 0, which gives ν₀ = 1. The first central moment is ν₁ = 0. The second central moment ν₂ is the variance σ², that is, ν₂ = σ².
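These definitions translate directly into code. A small Python illustration (ours, with arbitrary values and probabilities) computes the mean, the variance, and the first few central moments of a discrete distribution:

```python
# Illustration (ours) of Eqs. (2.37), (2.38), and (2.40) for an arbitrary
# discrete distribution.
xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]

mean = sum(x * p for x, p in zip(xs, ps))               # E[X], Eq. (2.37)

def central_moment(k):
    # nu_k = E[(X - mu)^k], Eq. (2.40)
    return sum((x - mean) ** k * p for x, p in zip(xs, ps))

print(mean, central_moment(2))                # mu = 2.0, sigma^2 = 1.0
print(central_moment(0), central_moment(1))   # nu_0 = 1.0, nu_1 = 0.0
```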

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128172162000090

Sampling distributions

Kandethody M. Ramachandran, Chris P. Tsokos, in Mathematical Statistics with Applications in R (Third Edition), 2021

4.4 The normal approximation to the binomial distribution

We know that a binomial random variable Y, with parameters n and p = P(success), can be viewed as the number of successes in n trials and can be written as:

$$Y = \sum_{i=1}^{n} X_i$$

where

$$X_i = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } (1-p). \end{cases}$$

The fraction of successes in n trials is:

$$\frac{Y}{n} = \frac{1}{n} \sum_{i=1}^{n} X_i = \bar{X}.$$

Hence, Y/n is a sample mean. Since E(X_i) = p and Var(X_i) = p(1 − p), we have:

$$E\left(\frac{Y}{n}\right) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\, np = p$$

and

$$Var\left(\frac{Y}{n}\right) = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) = \frac{p(1-p)}{n}.$$

Because Y = nX̄, by the central limit theorem, Y has an approximate normal distribution with mean μ = np and variance σ² = np(1 − p). Because the calculation of the binomial probabilities is cumbersome for large sample sizes n, the normal approximation to the binomial distribution is widely used. A useful rule of thumb for use of the normal approximation to the binomial distribution is to make sure n is large enough that np ≥ 5 and n(1 − p) ≥ 5. Otherwise, the binomial distribution may be so skewed that the normal distribution may not provide a good approximation. Other rules, such as np ≥ 10 and n(1 − p) ≥ 10, or np(1 − p) ≥ 10, are also used in the literature. Because all of these rules are only approximations, for consistency's sake we will use np ≥ 5 and n(1 − p) ≥ 5 to test for largeness of sample size in the normal approximation to the binomial distribution. If the need arises, we could use the more stringent condition np(1 − p) ≥ 10.

Recall that discrete random variables take no values between integers, and their probabilities are concentrated at the integers, as shown in Fig. 4.7. However, normal random variables have zero probability at these integers; they have nonzero probability only over intervals. Because we are approximating a discrete distribution with a continuous distribution, we need to introduce a correction factor for continuity, which is explained next.

Figure 4.7. Probability function of discrete random variable X.

Correction for continuity for the normal approximation to the binomial distribution

(a)

To approximate P(X ≤ a) or P(X > a), the correction for continuity is (a + 0.5), that is,

$$P(X \leq a) = P\left(Z < \frac{(a + 0.5) - np}{\sqrt{np(1-p)}}\right)$$

and

$$P(X > a) = P\left(Z > \frac{(a + 0.5) - np}{\sqrt{np(1-p)}}\right).$$

(b)

To approximate P(X ≥ a) or P(X < a), the correction for continuity is (a − 0.5), that is,

$$P(X \geq a) = P\left(Z > \frac{(a - 0.5) - np}{\sqrt{np(1-p)}}\right)$$

and

$$P(X < a) = P\left(Z < \frac{(a - 0.5) - np}{\sqrt{np(1-p)}}\right).$$

(c)

To approximate P(a ≤ X ≤ b), treat the ends of the interval separately, calculating two distinct z-values according to steps (a) and (b), that is,

$$P(a \leq X \leq b) = P\left(\frac{(a - 0.5) - np}{\sqrt{np(1-p)}} < Z < \frac{(b + 0.5) - np}{\sqrt{np(1-p)}}\right).$$

(d)

Use the normal table to obtain the approximate probability of the binomial outcome.

The shaded area in Fig. 4.8 represents the continuity correction for P(X = 1).

Figure 4.8. Continuity correction for P(X = 1).

Example 4.4.2

A study of parallel interchange ramps revealed that many drivers do not use the entire length of parallel lanes for acceleration, but seek, as soon as possible, a gap in the major stream of traffic to merge. At one site on Interstate Highway 75, 46% of drivers used less than one-third of the lane length available before merging. Suppose we monitor the merging pattern of a random sample of 250 drivers at this site.

(a)

What is the probability that fewer than 120 of the drivers will use less than one-third of the acceleration lane length before merging?

(b)

What is the probability that more than 225 of the drivers will use less than one-third of the acceleration lane length before merging?

Solution

First we check for adequacy of the sample size:

$$np = (250)(0.46) = 115 \quad \text{and} \quad n(1-p) = (250)(1 - 0.46) = 135.$$

Both are greater than 5. Hence, we can use the normal approximation. Let X be the number of drivers using less than one-third of the lane length available before merging. Then X can be considered to be a binomial random variable. Also,

$$\mu = np = (250)(0.46) = 115.0$$

and

$$\sigma = \sqrt{np(1-p)} = \sqrt{250(0.46)(0.54)} = 7.8804.$$

Thus,

(a)

$P(X < 120) = P\left(Z < \frac{119.5 - 115}{7.8804} = 0.57103\right) = 0.7157$; that is, we are approximately 71.57% certain that fewer than 120 drivers will use less than one-third of the acceleration length before merging.

(b)

$P(X > 225) = P\left(Z > \frac{225.5 - 115}{7.8804} = 14.02213\right) \approx 0$; that is, there is almost no chance that more than 225 drivers will use less than one-third of the acceleration lane length before merging.
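The solution is easy to reproduce programmatically. The sketch below is ours, with SciPy as an assumed dependency; it computes the same continuity-corrected normal probabilities and, for comparison, the exact binomial values.

```python
# Reproduction (ours) of Example 4.4.2 with SciPy, plus the exact binomial
# probabilities for comparison.
from scipy.stats import norm, binom

n, p = 250, 0.46
mu = n * p                                   # 115.0
sigma = (n * p * (1 - p)) ** 0.5             # ~7.8804

# (a) P(X < 120), continuity-corrected, vs. the exact binomial CDF:
print(norm.cdf((119.5 - mu) / sigma), binom.cdf(119, n, p))   # ~0.716
# (b) P(X > 225), continuity-corrected, vs. the exact binomial tail:
print(norm.sf((225.5 - mu) / sigma), binom.sf(225, n, p))     # both ~0
```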

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012817815700004X

Pairs of Random Variables

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

Section 5.4 Conditional Distribution, Density and Mass Functions

5.17

For the discrete random variables whose joint PMF is described by the table in Exercise 5.14, find the following conditional PMFs:

(a)

P_M(m|N = 2);

(b)

P_M(m|N ≥ 2);

(c)

P_N(n|M ≠ 2).

5.18

Consider again the random variables in Exercise 5.11 that are uniformly distributed over an ellipse.

(a)

Find the conditional PDFs, f_X|Y(x|y) and f_Y|X(y|x).

(b)

Find f_X|{Y>1}(x).

(c)

Find f_Y|{|X|<1}(y).

5.19

Recall the random variables of Exercise 5.12 that are uniformly distributed over the region |X| + |Y| ≤ 1.

(a)

Find the conditional PDFs, f_X|Y(x|y) and f_Y|X(y|x).

(b)

Find the conditional CDFs, F_X|Y(x|y) and F_Y|X(y|x).

(c)

Find f_X|{Y>1/2}(x) and F_X|{Y>1/2}(x).

5.20

Suppose a pair of random variables (X, Y) is uniformly distributed over a rectangular region, A: x₁ < X < x₂, y₁ < Y < y₂. Find the conditional PDF of (X, Y) given the conditioning event (X, Y) ∈ B, where the region B is an arbitrary region completely contained within the rectangle A as shown in the accompanying figure.
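For Exercise 5.20, intuition says that conditioning a uniform distribution on a subregion leaves it uniform over that subregion. The rejection-sampling sketch below is our own illustration, with A taken as the unit square and B a centered disk (both arbitrary choices):

```python
# Monte Carlo illustration (ours): (X, Y) uniform over A = unit square,
# conditioned on a disk B of radius 0.25 centered at (0.5, 0.5).
import random

random.seed(0)
in_B = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25 ** 2

pts = [(random.random(), random.random()) for _ in range(500_000)]
cond = [(x, y) for x, y in pts if in_B(x, y)]

print(len(cond) / len(pts))                        # ~ area of B = pi/16 ~ 0.196
print(sum(x < 0.5 for x, _ in cond) / len(cond))   # ~0.5: still uniform over B
```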

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123869814500084

Introduction to Probability Theory

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

2.8 Discrete Random Variables

Suppose we conduct an experiment, E, which has some sample space, S. Furthermore, let ξ be some outcome defined on the sample space, S. It is useful to define functions of the outcome ξ, X = f(ξ). That is, the function f has as its domain all possible outcomes associated with the experiment, E. The range of the function f will depend upon how it maps outcomes to numerical values but in general will be the set of real numbers or some part of the set of real numbers. Formally, we have the following definition.

Definition 2.9: A random variable is a real-valued function of the elements of a sample space, S. Given an experiment, E, with sample space, S, the random variable X maps each possible outcome, ξ ∈ S, to a real number X(ξ) as specified by some rule. If the mapping X(ξ) is such that the random variable X takes on a finite or countably infinite number of values, then we refer to X as a discrete random variable; whereas, if the range of X(ξ) is an uncountably infinite number of points, we refer to X as a continuous random variable.

Since X = f(ξ) is a random variable whose numerical value depends on the outcome of an experiment, we cannot describe the random variable by stating its value; rather, we must give it a probabilistic description by stating the probabilities that the variable X takes on a specific value or values (e.g., Pr(X = 3) or Pr(X > 8)). For now, we will focus on random variables that take on discrete values and will describe these random variables in terms of probabilities of the form Pr(X = x). In the next chapter, when we study continuous random variables, we will find this description to be insufficient and will introduce other probabilistic descriptions as well.

Definition 2.10: The probability mass function (PMF), P_X(x), of a random variable, X, is a function that assigns a probability to each possible value of the random variable, X. The probability that the random variable X takes on the specific value x is the value of the probability mass function for x. That is, P_X(x) = Pr(X = x). We use the convention that uppercase variables represent random variables while lowercase variables represent fixed values that the random variable can assume.

Example 2.23

A discrete random variable may be defined for the random experiment of flipping a coin. The sample space of outcomes is S = {H, T}. We could define the random variable X to be X(H) = 0 and X(T) = 1. That is, the sample space {H, T} is mapped to the set {0, 1} by the random variable X. Assuming a fair coin, the resulting probability mass function is P_X(0) = 1/2 and P_X(1) = 1/2. Note that the mapping is not unique and we could have just as easily mapped the sample space {H, T} to any other pair of real numbers (e.g., {1, 2}).

Example 2.24

Suppose we repeat the experiment of flipping a fair coin n times and observe the sequence of heads and tails. A random variable, Y, could be defined to be the number of times tails occurs in n trials. It turns out that the probability mass function for this random variable is

$$P_Y(k) = \binom{n}{k} \left(\frac{1}{2}\right)^n, \quad k = 0, 1, \ldots, n.$$

The details of how this PMF is obtained will be deferred until later in this section.

Example 2.25

Again, let the experiment be the flipping of a coin, and this time we will keep repeating the experiment until the first time a heads occurs. The random variable Z will represent the number of times until the first occurrence of a heads. In this case, the random variable Z can take on any positive integer value, 1 ≤ Z < ∞. The probability mass function of the random variable Z can be worked out as follows:

$$\Pr(Z = n) = \Pr(n-1 \text{ tails followed by } 1 \text{ heads}) = (\Pr(T))^{n-1}\,\Pr(H) = \left(\frac{1}{2}\right)^{n-1}\left(\frac{1}{2}\right) = 2^{-n}.$$

Hence,

$P_Z(n) = 2^{-n}$, n = 1, 2, 3, ….

Example 2.26

In this example, we will estimate the PMF in Example 2.24 via MATLAB simulation using the relative frequency approach. Suppose the experiment consists of tossing the coin n = 10 times and counting the number of tails. We then repeat this experiment a large number of times and count the relative frequency of each number of tails to estimate the PMF. The following MATLAB code can be used to accomplish this. Results of running this code are shown in Figure 2.3.
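The MATLAB listing itself is not reproduced in this excerpt; the following Python sketch (ours) performs the equivalent simulation, with n = 10 tosses as in the text and m = 100,000 repetitions as our choice.

```python
# Python equivalent (ours) of the simulation described above: toss a fair
# coin n = 10 times, count tails, repeat m times, tabulate relative
# frequencies, and compare with the exact PMF of Example 2.24.
import random
from collections import Counter
from math import comb

random.seed(0)
n, m = 10, 100_000
counts = Counter(sum(random.randint(0, 1) for _ in range(n)) for _ in range(m))

for k in range(n + 1):
    print(k, counts[k] / m, comb(n, k) * 0.5 ** n)   # estimate vs. exact
```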

Figure 2.3. MATLAB simulation results from Example 2.26.

Try running this code using a larger value for m. You should see more accurate relative frequency estimates as you increase m.

From the preceding examples, it should be clear that the probability mass function associated with a random variable, X, must obey certain properties. First, since P_X(x) is a probability, it must be non-negative and no greater than 1. Second, if we sum P_X(x) over all x, then this is the same as the sum of the probabilities of all outcomes in the sample space, which must be equal to 1. Stated mathematically, we may conclude that

$$0 \leq P_X(x) \leq 1, \qquad \sum_x P_X(x) = 1.$$

When developing the probability mass function for a random variable, it is useful to check that the PMF satisfies these properties.

In the paragraphs that follow, we list some commonly used discrete random variables, along with their probability mass functions, and some real-world applications in which each might typically be used.

A. Bernoulli Random Variable

This is the simplest possible random variable and is used to represent experiments that have two possible outcomes. These experiments are called Bernoulli trials and the resulting random variable is called a Bernoulli random variable. It is most common to associate the values {0, 1} with the two outcomes of the experiment. If X is a Bernoulli random variable, its probability mass function is of the form

(2.34) $$P_X(0) = 1 - p, \quad P_X(1) = p.$$

The coin tossing experiment would produce a Bernoulli random variable. In that case, we may map the outcome H to the value X = 1 and T to X = 0. Also, we would use the value p = 1/2 assuming that the coin is fair. Examples of engineering applications might include radar systems where the random variable could indicate the presence (X = 1) or absence (X = 0) of a target, or a digital communication system where X = 1 might indicate a bit was transmitted in error while X = 0 would indicate that the bit was received correctly. In these examples, we would probably expect that the value of p would be much smaller than 1/2.

B. Binomial Random Variable

Consider repeating a Bernoulli trial n times, where the outcome of each trial is independent of all others. The Bernoulli trial has a sample space of S = {0, 1} and we say that the repeated experiment has a sample space of Sⁿ = {0, 1}ⁿ, which is referred to as a Cartesian space. That is, outcomes of the repeated trials are represented as n-element vectors whose elements are taken from S. Consider, for example, the outcome

(2.35) $$\xi_k = (\underbrace{1, 1, \ldots, 1}_{k \text{ times}}, \underbrace{0, 0, \ldots, 0}_{n-k \text{ times}}).$$

The probability of this outcome occurring is

(2.36) $$\Pr(\xi_k) = \Pr(1, 1, \ldots, 1, 0, 0, \ldots, 0) = \Pr(1)\Pr(1)\cdots\Pr(1)\,\Pr(0)\Pr(0)\cdots\Pr(0) = (\Pr(1))^k (\Pr(0))^{n-k} = p^k (1-p)^{n-k}.$$

In fact, the order of the 1s and 0s in the sequence is irrelevant. Any outcome with exactly k 1s and n−k 0s would have the same probability. Now let the random variable X represent the number of times the outcome 1 occurred in the sequence of n trials. This is known as a binomial random variable and takes on integer values from 0 to n. To find the probability mass function of the binomial random variable, let A_k be the set of all outcomes that have exactly k 1s and n−k 0s. Note that all outcomes in this event occur with the same probability. Furthermore, all outcomes in this event are mutually exclusive. Then,

P_X(k) = Pr(A_k) = (# of outcomes in A_k) × (probability of each outcome in A_k)

(2.37) $$P_X(k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, 2, \ldots, n.$$

The number of outcomes in the event A_k is just the number of combinations of n objects taken k at a time. Referring to Theorem 2.7, this is the binomial coefficient,

(2.38) $$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$

As a check, we verify that this probability mass function is properly normalized:

(2.39) $$\sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} = (p + 1 - p)^n = 1^n = 1.$$

In the above calculation, we have used the binomial expansion

(2.40) $$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}.$$

Binomial random variables occur, in practice, any time Bernoulli trials are repeated. For example, in a digital communication system, a packet of n bits may be transmitted and we might be interested in the number of bits in the packet that are received in error. Or, perhaps a bank manager might be interested in the number of tellers that are serving customers at a given point in time. Similarly, a medical technician might want to know how many cells from a blood sample are white and how many are red. In Example 2.24, the coin tossing experiment was repeated n times and the random variable Y represented the number of times tails occurred in the sequence of n tosses. This is a repetition of a Bernoulli trial, and hence the random variable Y should be a binomial random variable with p = 1/2 (assuming the coin is fair).
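A short check (ours) of Equations (2.37) and (2.39) in the packet-error setting just described, with n = 128 bits and an assumed bit-error probability p = 0.01:

```python
# Check (ours) of the binomial PMF (2.37) and its normalization (2.39) for a
# packet of n = 128 bits with assumed bit-error probability p = 0.01.
from math import comb

n, p = 128, 0.01
pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

print(sum(pmf))                 # 1.0 up to rounding error, per Eq. (2.39)
print(pmf[0], pmf[1], pmf[2])   # most packets suffer 0, 1, or 2 bit errors
```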

C. Poisson Random Variable

Consider a binomial random variable, X, where the number of repeated trials, n, is very large. In that case, evaluating the binomial coefficients can pose numerical problems. If the probability of success in each individual trial, p, is very small, then the binomial random variable can be well approximated by a Poisson random variable. That is, the Poisson random variable is a limiting case of the binomial random variable. Formally, let n approach infinity and p approach 0 in such a manner that $\lim_{n\to\infty} np = \alpha$. Then the binomial probability mass function converges to the form

(2.41) $$P_X(k) = \frac{\alpha^k}{k!} e^{-\alpha}, \quad k = 0, 1, 2, \ldots,$$

which is the probability mass function of a Poisson random variable. We see that the Poisson random variable is properly normalized by noting that

(2.42) $$\sum_{k=0}^{\infty} \frac{\alpha^k}{k!} e^{-\alpha} = e^{-\alpha} e^{\alpha} = 1,$$

(see Equation E.14 in Appendix E). The Poisson random variable is extremely important as it describes the behavior of many physical phenomena. It is commonly used in queuing theory and in communication networks. The number of customers arriving at a cashier in a store during some time interval may be well modeled as a Poisson random variable, as may the number of data packets arriving at a node in a computer network. We will see increasingly in later chapters that the Poisson random variable plays a fundamental role in our development of a probabilistic description of noise.

D. Geometric Random Variable

Consider repeating a Bernoulli trial until the first occurrence of the outcome ξ0. If X represents the number of times the outcome ξ1 occurs before the first occurrence of ξ0, then X is a geometric random variable whose probability mass function is

(2.43) $$P_X(k) = (1-p)\, p^k, \quad k = 0, 1, 2, \ldots.$$

We might also formulate the geometric random variable in a slightly different way. Suppose X counted the number of trials that were performed until the first occurrence of ξ0. Then, the probability mass function would take on the form,

(2.44) $$P_X(k) = (1-p)\, p^{k-1}, \quad k = 1, 2, 3, \ldots.$$

The geometric random variable can also be generalized to the case where the outcome ξ0 must occur exactly m times. That is, the generalized geometric random variable counts the number of Bernoulli trials that must be repeated until the mth occurrence of the outcome ξ0. We can derive the form of the probability mass function for the generalized geometric random variable from what we know about binomial random variables. For the mth occurrence of ξ0 to occur on the kth trial, the first k−1 trials must have had m−1 occurrences of ξ0 and k−m occurrences of ξ1. Then

P_X(k) = Pr({(m − 1) occurrences of ξ0 in k − 1 trials} ∩ {ξ0 occurs on the kth trial})

(2.45) $$P_X(k) = \binom{k-1}{m-1} p^{k-m} (1-p)^{m-1} (1-p) = \binom{k-1}{m-1} p^{k-m} (1-p)^{m}, \quad k = m, m+1, m+2, \ldots$$

This generalized geometric random variable sometimes goes by the name of a Pascal random variable or the negative binomial random variable.
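A quick numerical sanity check (ours) of Equation (2.45), with assumed values p = 0.3 and m = 4: the PMF should sum to one, and the mean number of trials until the mth occurrence of ξ0 is m/(1 − p).

```python
# Sanity check (ours) of the generalized geometric PMF in Eq. (2.45):
# Pr(xi_1) = p, Pr(xi_0) = 1 - p; the values of p and m are assumed.
from math import comb

p, m = 0.3, 4

def pascal_pmf(k):
    return comb(k - 1, m - 1) * p ** (k - m) * (1 - p) ** m

ks = range(m, 400)                                        # truncated support
print(sum(pascal_pmf(k) for k in ks))                     # ~1.0
print(sum(k * pascal_pmf(k) for k in ks), m / (1 - p))    # both ~5.714
```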

Of course, one can define many other random variables and develop the associated probability mass functions. We have chosen to introduce some of the more important discrete random variables here. In the next chapter, we will introduce some continuous random variables and the appropriate probabilistic descriptions of these random variables. However, to close out this chapter, we provide a section showing how some of the material covered herein can be used in at least one engineering application.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123869814500059

Introduction

Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011

1.2.3 Moments and Expected Values

If X is a discrete random variable, then its mth moment is given by

(1.6) $$E[X^m] = \sum_i x_i^m \Pr\{X = x_i\}$$

[where the x_i are specified in (1.1)], provided that the infinite sum converges absolutely. Where the infinite sum diverges, the moment is said not to exist. If X is a continuous random variable with probability density function f(x), then its mth moment is given by

(1.7) $$E[X^m] = \int_{-\infty}^{+\infty} x^m f(x)\, dx,$$

provided that this integral converges absolutely.

The first moment, corresponding to m = 1, is commonly called the mean or expected value of X and written m_X or μ_X. The mth central moment of X is defined as the mth moment of the random variable X − μ_X, provided that μ_X exists. The first central moment is zero. The second central moment is called the variance of X and written σ_X² or Var[X]. We have the equivalent formulas Var[X] = E[(X − μ)²] = E[X²] − μ².

The median of a random variable X is any value v with the property that

$$\Pr\{X \geq v\} \geq \frac{1}{2} \quad \text{and} \quad \Pr\{X \leq v\} \geq \frac{1}{2}.$$

If X is a random variable and g is a function, then Y = g(X) is also a random variable. If X is a discrete random variable with possible values x₁, x₂, …, then the expectation of g(X) is given by

(1.8) $$E[g(X)] = \sum_{i=1}^{\infty} g(x_i) \Pr\{X = x_i\},$$

provided that the sum converges absolutely. If X is continuous and has the probability density function f_X, then the expected value of g(X) is evaluated from

(1.9) $$E[g(X)] = \int g(x) f_X(x)\, dx.$$

The general formula, covering both the discrete and continuous cases, is

(1.10) $$E[g(X)] = \int g(x)\, dF_X(x),$$

where F_X is the distribution function of the random variable X. Technically speaking, the integral in (1.10) is a Lebesgue–Stieltjes integral. We do not require knowledge of such integrals in this text, but interpret (1.10) to signify (1.8) when X is a discrete random variable, and to represent (1.9) when X possesses a probability density f_X.

Let F_Y(y) = Pr{Y ≤ y} denote the distribution function for Y = g(X). When X is a discrete random variable, then

$$E[Y] = \sum_j y_j \Pr\{Y = y_j\} = \sum_i g(x_i) \Pr\{X = x_i\}$$

if y_j = g(x_j) and provided that the second sum converges absolutely. In general,

(1.11) $$E[Y] = \int y\, dF_Y(y) = \int g(x)\, dF_X(x).$$

If X is a discrete random variable, then so is Y = g(X). It may be, however, that X is a continuous random variable, while Y is discrete (the reader should provide an example). Even so, one may compute E[Y] from either form in (1.11) with the same result.
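The agreement of both forms in (1.11) is easy to see in code. The sketch below is our own illustration, with an arbitrary PMF and g(x) = x²; it computes E[Y] once from the distribution of X and once from the induced distribution of Y:

```python
# Check (ours) that E[g(X)] computed from the PMF of X, Eq. (1.8), equals
# E[Y] computed from the induced PMF of Y = g(X).
from collections import defaultdict

xs = [-2, -1, 0, 1, 2]
px = [0.1, 0.2, 0.4, 0.2, 0.1]
g = lambda x: x * x

e_from_x = sum(g(x) * p for x, p in zip(xs, px))

py = defaultdict(float)
for x, p in zip(xs, px):
    py[g(x)] += p                        # PMF of Y (values of g may collide)
e_from_y = sum(y * p for y, p in py.items())

print(e_from_x, e_from_y)                # both 1.2
```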

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123814166000010

RANDOM VARIABLES AND EXPECTATION

Sheldon M. Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009

EXAMPLE 4.2a

Consider a random variable X that is equal to 1, 2, or 3. If we know that

$$p(1) = \frac{1}{2} \quad \text{and} \quad p(2) = \frac{1}{3}$$

then it follows (since p(1) + p(2) + p(3) = 1) that

$$p(3) = \frac{1}{6}.$$

A graph of p(x) is presented in Figure 4.1.

FIGURE 4.1. Graph of p(x), Example 4.2a.

The cumulative distribution function F can be expressed in terms of p(x) by

$$F(a) = \sum_{x \leq a} p(x).$$

If X is a discrete random variable whose set of possible values are x₁, x₂, x₃, …, where x₁ < x₂ < x₃ < …, then its distribution function F is a step function. That is, the value of F is constant in the intervals [x_{i−1}, x_i) and then takes a step (or jump) of size p(x_i) at x_i.

For instance, suppose X has a probability mass function given (as in Example 4.2a) by

$$p(1) = \frac{1}{2}, \quad p(2) = \frac{1}{3}, \quad p(3) = \frac{1}{6}.$$

Then the cumulative distribution function F of X is given by

$$F(a) = \begin{cases} 0 & a < 1 \\ \frac{1}{2} & 1 \leq a < 2 \\ \frac{5}{6} & 2 \leq a < 3 \\ 1 & 3 \leq a \end{cases}$$

This is graphically presented in Figure 4.2.

Figure 4.2. Graph of F(x).
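A tiny Python helper (ours) evaluates this step function directly from the PMF of Example 4.2a:

```python
# Step-function CDF (ours) built from the PMF of Example 4.2a.
pmf = {1: 1/2, 2: 1/3, 3: 1/6}

def F(a):
    # Sum the probability mass at all values x <= a
    return sum(p for x, p in pmf.items() if x <= a)

print(F(0.5), F(1.7), F(2.9), F(3.0))    # 0, 0.5, 0.8333..., 1.0
```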

Whereas the set of possible values of a discrete random variable is a sequence, we often must consider random variables whose set of possible values is an interval. Let X be such a random variable. We say that X is a continuous random variable if there exists a nonnegative function f(x), defined for all real x ∈ (−∞, ∞), having the property that for any set B of real numbers

(4.2.1) $$P\{X \in B\} = \int_B f(x)\, dx$$

The function f(x) is called the probability density function of the random variable X.

In words, Equation 4.2.1 states that the probability that X will be in B may be obtained by integrating the probability density function over the set B. Since X must assume some value, f(x) must satisfy

$$1 = P\{X \in (-\infty, \infty)\} = \int_{-\infty}^{\infty} f(x)\, dx.$$

All probability statements about X can be answered in terms of f(x). For example, letting B = [a, b], we obtain from Equation 4.2.1 that

(4.2.2) $$P\{a \leq X \leq b\} = \int_a^b f(x)\, dx$$

If we let a = b in the above, then

$$P\{X = a\} = \int_a^a f(x)\, dx = 0.$$

In words, this equation states that the probability that a continuous random variable will assume any particular value is zero. (See Figure 4.3.)

Figure 4.3. The probability density function $f(x) = \begin{cases} e^{-x} & x \geq 0 \\ 0 & x < 0. \end{cases}$

The relationship between the cumulative distribution F(·) and the probability density f(·) is expressed by

$$F(a) = P\{X \in (-\infty, a]\} = \int_{-\infty}^{a} f(x)\, dx$$

Differentiating both sides yields

$$\frac{d}{da} F(a) = f(a).$$

That is, the density is the derivative of the cumulative distribution function. A somewhat more intuitive interpretation of the density function may be obtained from Equation 4.2.2 as follows:

$$P\left\{a - \frac{\varepsilon}{2} \leq X \leq a + \frac{\varepsilon}{2}\right\} = \int_{a-\varepsilon/2}^{a+\varepsilon/2} f(x)\, dx \approx \varepsilon f(a)$$

when ε is small. In other words, the probability that X will be contained in an interval of length ε around the point a is approximately ε f(a). From this, we see that f(a) is a measure of how likely it is that the random variable will be near a.
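A quick numerical illustration (ours) of this interpretation, using the density of Figure 4.3:

```python
# Illustration (ours): P{a - eps/2 <= X <= a + eps/2} ~ eps * f(a) for small
# eps, using f(x) = e^(-x), x >= 0 (the density of Figure 4.3).
from math import exp

f = lambda x: exp(-x) if x >= 0 else 0.0
F = lambda x: 1 - exp(-x) if x >= 0 else 0.0    # its CDF

a, eps = 1.0, 0.01
print(F(a + eps / 2) - F(a - eps / 2))          # exact probability ~0.0036788
print(eps * f(a))                               # eps * f(a)        ~0.0036788
```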

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123704832000096