Suppose That the Random Variables X and Y Are Independent and You Know Their Distributions
Discrete Random Variable
A discrete random variable is one that can assume only a finite, or countably infinite, number of distinct values.
From: The Joy of Finite Mathematics , 2016
Basic Concepts in Probability
Oliver C. Ibe , in Markov Processes for Stochastic Modeling (Second Edition), 2013
1.2.3 Continuous Random Variables
Discrete random variables take a set of possible values that are either finite or countably infinite. However, there exists another group of random variables that can assume an uncountable set of possible values. Such random variables are called continuous random variables. Thus, we define a random variable X to be a continuous random variable if there exists a nonnegative function fX(x), defined for all real x ∈ (−∞, ∞), having the property that for any set A of real numbers,
The function fX(x) is called the probability density function (PDF) of the random variable X and is defined by
This means that
Discrete Lifetime Models
N. Unnikrishnan Nair, ... N. Balakrishnan, in Reliability Modelling and Analysis in Discrete Time, 2018
Geeta Distribution
A discrete random variable X is said to have Geeta distribution with parameters θ and α if
The distribution is L-shaped and unimodal with
and
The central moments satisfy a recurrence relation of the form
Estimation of the parameters can be done by the method of moments or by the maximum likelihood method. The moment estimates are
with the sample mean and variance appearing in these expressions. The maximum likelihood estimates are
and the other is obtained iteratively by solving the equation
with the sample size and the frequency of each observed value entering this equation. For details, see Consul (1990). As the Geeta model is a member of the MPSD family, the reliability properties can be readily obtained from those of the MPSD discussed in Section 3.2.2.
Probability Distributions of Interest
N. Balakrishnan, ... M.S. Nikulin, in Chi-Squared Goodness of Fit Tests with Applications, 2013
8.1.3 Poisson distribution
The discrete random variable X follows the Poisson distribution with a given parameter if
and we shall denote it accordingly. It is easy to show that
then
The distribution function of X is
(8.4)
where
is the incomplete gamma function. Often, for large values of the parameter, to compute (8.4), we can use a normal approximation
Let there be a sequence of independent and identically distributed random variables following the same Bernoulli distribution with parameter p, with
Let
Then, uniformly, we have
From this result, it follows that for large values of n,
Often this approximation is used with the so-called continuity correction given by
We shall now describe the Poisson approximation to the binomial distribution. Let there be a sequence of binomial random variables such that
Then,
In practice, this means that for "large" values of n and "small" values of p, we may approximate the binomial distribution by the Poisson distribution with parameter np, that is,
It is of interest to note that (Hodges and Le Cam, 1960)
Hence, if the probability of success in Bernoulli trials is small, and the number of trials is large, then the number of observed successes in the trials can be regarded as a random variable following the Poisson distribution.
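The normal approximation with continuity correction described above is easy to check numerically. Below is a Python sketch (the parameter value 100 and the evaluation point 110 are arbitrary choices for illustration, not taken from the text) comparing the exact Poisson distribution function with its continuity-corrected normal approximation:

```python
import math

def phi(z):
    # Standard normal CDF, written via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf(k, lam):
    # Exact P(X <= k) for X ~ Poisson(lam), accumulating terms directly
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_cdf_normal(k, lam):
    # Normal approximation with the continuity correction k + 0.5
    return phi((k + 0.5 - lam) / math.sqrt(lam))

exact = poisson_cdf(110, 100.0)
approx = poisson_cdf_normal(110, 100.0)
```

For a parameter as large as 100 the two values agree to about two decimal places, and the agreement improves as the parameter grows.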
Multiple Random Variables
Oliver C. Ibe , in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014
Section 5.7 Covariance and Correlation Coefficient
- 5.20
-
Two discrete random variables X and Y have the joint PMF given by
- a.
-
Are X and Y independent?
- b.
-
What is the covariance of X and Y?
- 5.21
-
Two events A and B have given probabilities of occurring separately and jointly. Let the random variable X be defined such that X = 1 if event A occurs and X = 0 if event A does not occur. Similarly, let the random variable Y be defined such that Y = 1 if event B occurs and Y = 0 if event B does not occur.
- a.
-
Find E[X] and the variance of X.
- b.
-
Find E[Y] and the variance of Y.
- c.
-
Find ρXY and determine whether or not X and Y are uncorrelated.
- 5.22
-
A fair die is tossed three times. Let X be the random variable that denotes the number of 1's and let Y be the random variable that denotes the number of 3's. Find the correlation coefficient of X and Y.
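Exercise 5.22 can be checked by brute-force enumeration, since three die tosses give only 216 equally likely outcomes. A Python sketch of such a check (not part of the original text):

```python
from itertools import product

# Enumerate all 6^3 equally likely outcomes of three die tosses
outcomes = list(product(range(1, 7), repeat=3))

def mean(vals):
    return sum(vals) / len(vals)

xs = [sum(1 for d in o if d == 1) for o in outcomes]  # number of 1's
ys = [sum(1 for d in o if d == 3) for o in outcomes]  # number of 3's

mx, my = mean(xs), mean(ys)
cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
var_x = mean([(x - mx) ** 2 for x in xs])
var_y = mean([(y - my) ** 2 for y in ys])
rho = cov / (var_x * var_y) ** 0.5
```

The enumeration gives ρ = −1/5; the negative sign reflects that a toss showing a 1 cannot also show a 3.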
Mathematical foundations
Xin-She Yang, in Introduction to Algorithms for Data Mining and Machine Learning, 2019
2.4.1 Random variables
For a discrete random variable X with distinct values, such as the number of cars passing through a junction, each value may occur with a certain probability. In other words, the probability varies and is associated with the corresponding value of the random variable. Traditionally, an uppercase letter such as X is used to denote a random variable, whereas a lowercase letter such as x represents its values. For example, if X means a coin-flipping outcome, then x = 0 (tail) or 1 (head). A probability function is a function that assigns probabilities to all the discrete values of the random variable X.
As an event must occur within a sample space, all the probabilities must sum to one, which leads to
(2.33)
For example, the outcomes of tossing a fair coin form a sample space. The outcome of a head (H) is an event with probability 1/2, and the outcome of a tail (T) is likewise an event with probability 1/2. The sum of both probabilities should be 1, that is,
(2.34)
The cumulative probability function of X is defined by
(2.35)
Two main measures for a random variable X with a given probability distribution are its mean and variance. The mean μ, or expectation of X, is defined by
(2.36)
for a continuous distribution, where the integration is taken within the integration limits. If the random variable is discrete, then the integration becomes the weighted sum
(2.37)
The variance is the expectation of the squared deviation from the mean, that is, σ² = E[(X − μ)²]. We have
(2.38)
The square root of the variance is called the standard deviation, which is simply σ.
The above definition of the mean is essentially the first moment if we define the kth moment of a random variable X (with a probability density distribution) by
(2.39)
Similarly, we can define the kth central moment by
(2.40)
where μ is the mean (the first moment). Thus, the zeroth central moment is the sum of all probabilities when k = 0, which gives 1. The first central moment is zero. The second central moment is the variance, that is, E[(X − μ)²] = σ².
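The moment definitions above translate directly into code for a discrete random variable. The following Python sketch (the PMF values are hypothetical, chosen only for illustration) computes raw and central moments from a PMF:

```python
# Hypothetical PMF for a discrete random variable (values and probabilities assumed)
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

def moment(pmf, k):
    # kth raw moment: E[X^k] = sum of x^k p(x)
    return sum((x ** k) * p for x, p in pmf.items())

def central_moment(pmf, k):
    # kth central moment: E[(X - mu)^k]
    mu = moment(pmf, 1)
    return sum(((x - mu) ** k) * p for x, p in pmf.items())

total = moment(pmf, 0)        # zeroth moment: probabilities sum to 1
mu = moment(pmf, 1)           # mean (first moment)
var = central_moment(pmf, 2)  # variance (second central moment)
sigma = var ** 0.5            # standard deviation
```

For this PMF the mean is 1.7, the variance 0.81, and the standard deviation 0.9; the first central moment comes out zero, as it must.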
Sampling distributions
Kandethody M. Ramachandran , Chris P. Tsokos , in Mathematical Statistics with Applications in R (Third Edition), 2021
4.4 The normal approximation to the binomial distribution
We know that a binomial random variable Y, with parameters n and P = P(success), can be viewed as the number of successes in n trials and can be written as:
where
The fraction of successes in n trials is:
Hence, Y/n is a sample mean. Since E(Xi) = P and Var(Xi) = P(1 − P), we have:
and
Thus, by the central limit theorem, Y has an approximate normal distribution with mean μ = np and variance σ² = np(1 − P). Because the calculation of the binomial probabilities is cumbersome for large sample sizes n, the normal approximation to the binomial distribution is widely used. A useful rule of thumb for use of the normal approximation to the binomial distribution is to make sure n is large enough that np ≥ 5 and n(1 − P) ≥ 5. Otherwise, the binomial distribution may be so skewed that the normal distribution may not provide a good approximation. Other rules, such as np ≥ 10 and n(1 − P) ≥ 10, or np(1 − P) ≥ 10, are also used in the literature. Because all of these rules are only approximations, for consistency's sake we will use np ≥ 5 and n(1 − P) ≥ 5 to test for largeness of sample size in the normal approximation to the binomial distribution. If the need arises, we could employ the more stringent condition np(1 − P) ≥ 10.
Recall that discrete random variables take no values between integers, and their probabilities are concentrated at the integers, as shown in Fig. 4.7. However, normal random variables have zero probability at these integers; they have nonzero probability only over intervals. Because we are approximating a discrete distribution with a continuous distribution, we need to introduce a correction factor for continuity, which is explained next.
Figure 4.7. Probability function of discrete random variable X.
Correction for continuity for the normal approximation to the binomial distribution
- (a)
-
To approximate P(X ≤ a) or P(X > a), the correction for continuity is (a + 0.5), that is,
and
- (b)
-
To approximate P(X ≥ a) or P(X < a), the correction for continuity is (a − 0.5), that is,
and
- (c)
-
To approximate P(a ≤ X ≤ b), treat the ends of the interval separately, calculating two distinct z-values according to steps (a) and (b), that is,
- (d)
-
Use the normal table to obtain the approximate probability of the binomial outcome.
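The steps above can be sketched as a small Python helper (the function name is a hypothetical choice; the normal CDF is computed with the error function rather than a table):

```python
import math

def phi(z):
    # Standard normal CDF via the error function (stands in for the normal table)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binom_normal_approx(n, p, a, b):
    """Approximate P(a <= X <= b) for X ~ Binomial(n, p) using the
    continuity-corrected normal approximation."""
    # Rule-of-thumb adequacy check: np >= 5 and n(1 - p) >= 5
    assert n * p >= 5 and n * (1 - p) >= 5, "sample size too small"
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    # Steps (a)-(c): shift each end of the interval by 0.5
    return phi((b + 0.5 - mu) / sigma) - phi((a - 0.5 - mu) / sigma)
```

For example, binom_normal_approx(20, 0.5, 8, 12) gives about 0.736, close to the exact binomial value.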
The shaded area in Fig. 4.8 represents the continuity correction for P(X = 1).
Figure 4.8. Continuity correction for P(X = 1).
Example 4.4.2
A study of parallel interchange ramps revealed that many drivers do not use the entire length of parallel lanes for acceleration, but seek, as soon as possible, a gap in the major stream of traffic to merge. At one site on Interstate Highway 75, 46% of drivers used less than one-third of the lane length available before merging. Suppose we monitor the merging pattern of a random sample of 250 drivers at this site.
- (a)
-
What is the probability that fewer than 120 of the drivers will use less than one-third of the acceleration lane length before merging?
- (b)
-
What is the probability that more than 225 of the drivers will use less than one-third of the acceleration lane length before merging?
Solution
First we check for adequacy of the sample size:
Both are greater than 5. Hence, we can use the normal approximation. Let X be the number of drivers using less than one-third of the lane length available before merging. Then X can be considered to be a binomial random variable. Also,
and
Thus,
- (a)
-
that is, we are approximately 71.57% certain that fewer than 120 drivers will use less than one-third of the acceleration lane length before merging.
- (b)
-
that is, there is almost no chance that more than 225 drivers will use less than one-third of the acceleration lane length before merging.
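The two answers can be reproduced without tables. A Python sketch of the same computation (the error function replaces the normal table, so the value of part (a) differs from the table-based 71.57% only in the third decimal):

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 250, 0.46
mu = n * p                          # 115.0
sigma = math.sqrt(n * p * (1 - p))  # about 7.88

# (a) P(X < 120) = P(X <= 119); the continuity correction uses 119.5
p_a = phi((119.5 - mu) / sigma)

# (b) P(X > 225); the continuity correction uses 225.5
p_b = 1.0 - phi((225.5 - mu) / sigma)
```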
Pairs of Random Variables
Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012
Section 5.4 Conditional Distribution, Density and Mass Functions
- 5.17
-
For the discrete random variables whose joint PMF is described by the table in Exercise 5.14, find the following conditional PMFs:
- (a)
-
PM(m|N=2);
- (b)
-
PM(m|N≥2);
- (c)
-
PN(n|M≠2).
- 5.18
-
Consider again the random variables in Exercise 5.11 that are uniformly distributed over an ellipse.
- (a)
-
Find the conditional PDFs, fX|Y(x|y) and fY|X(y|x).
- (b)
-
Find fX|Y>1(x).
- (c)
-
Find fY|{|X|<1}(y).
- 5.19
-
Recall the random variables of Exercise 5.12 that are uniformly distributed over the region |X| + |Y| ≤ 1.
- (a)
-
Find the conditional PDFs, fX|Y(x|y) and fY|X(y|x).
- (b)
-
Find the conditional CDFs, FX|Y(x|y) and FY|X(y|x).
- (c)
-
Find fX|{Y > 1/2}(x) and FX|{Y > 1/2}(x).
- 5.20
-
Suppose a pair of random variables (X, Y) is uniformly distributed over a rectangular region, A: x1 < X < x2, y1 < Y < y2. Find the conditional PDF of (X, Y) given the conditioning event (X, Y) ∈ B, where the region B is an arbitrary region completely contained within the rectangle A as shown in the accompanying figure.
Introduction to Probability Theory
Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012
2.8 Discrete Random Variables
Suppose we conduct an experiment, E, which has some sample space, S. Furthermore, let ξ be some outcome defined on the sample space, S. It is useful to define functions of the outcome ξ, X = f(ξ). That is, the function f has as its domain all possible outcomes associated with the experiment, E. The range of the function f will depend upon how it maps outcomes to numerical values but in general will be the set of real numbers or some part of the set of real numbers. Formally, we have the following definition.
Definition 2.9: A random variable is a real-valued function of the elements of a sample space, S. Given an experiment, E, with sample space, S, the random variable X maps each possible outcome, ξ ∈ S, to a real number X(ξ) as specified by some rule. If the mapping X(ξ) is such that the random variable X takes on a finite or countably infinite number of values, then we refer to X as a discrete random variable; whereas, if the range of X(ξ) is an uncountably infinite number of points, we refer to X as a continuous random variable.
Since X = f(ξ) is a random variable whose numerical value depends on the outcome of an experiment, we cannot describe the random variable by stating its value; rather, we must give it a probabilistic description by stating the probabilities that the variable X takes on a specific value or values (e.g., Pr(X=3) or Pr(X > 8)). For now, we will focus on random variables that take on discrete values and will describe these random variables in terms of probabilities of the form Pr(X=x). In the next chapter when we study continuous random variables, we will find this description to be insufficient and will introduce other probabilistic descriptions as well.
Definition 2.10: The probability mass function (PMF), PX(x), of a random variable, X, is a function that assigns a probability to each possible value of the random variable, X. The probability that the random variable X takes on the specific value x is the value of the probability mass function for x. That is, PX(x) = Pr(X=x). We use the convention that uppercase variables represent random variables while lowercase variables represent fixed values that the random variable can assume.
Example 2.23
A discrete random variable may be defined for the random experiment of flipping a coin. The sample space of outcomes is S = {H, T}. We could define the random variable X to be X(H) = 0 and X(T) = 1. That is, the sample space {H, T} is mapped to the set {0, 1} by the random variable X. Assuming a fair coin, the resulting probability mass function is PX(0) = 1/2 and PX(1) = 1/2. Note that the mapping is not unique and we could have just as easily mapped the sample space {H, T} to any other pair of real numbers (e.g., {1, 2}).
Example 2.24
Suppose we repeat the experiment of flipping a fair coin n times and observe the sequence of heads and tails. A random variable, Y, could be defined to be the number of times tails occurs in n trials. It turns out that the probability mass function for this random variable is
The details of how this PMF is obtained will be deferred until later in this section.
Example 2.25
Again, let the experiment be the flipping of a coin, and this time we will keep repeating the experiment until the first time a heads occurs. The random variable Z will represent the number of times until the first occurrence of a heads. In this case, the random variable Z can take on any positive integer value, 1 ≤ Z < ∞. The probability mass function of the random variable Z can be worked out as follows:
Hence,
PZ(n) = 2^(−n), n = 1, 2, 3, ….
Example 2.26
In this example, we will estimate the PMF of Example 2.24 via MATLAB simulation using the relative frequency approach. Suppose the experiment consists of tossing the coin n = 10 times and counting the number of tails. We then repeat this experiment a large number of times and count the relative frequency of each number of tails to estimate the PMF. The following MATLAB code can be used to accomplish this. Results of running this code are shown in Figure 2.3.
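The MATLAB listing itself does not survive in this excerpt. The following Python sketch carries out the same relative-frequency estimate (the repetition count m = 10,000 and the seed are arbitrary choices):

```python
import random

n = 10          # coin tosses per experiment
m = 10_000      # number of times the experiment is repeated
random.seed(1)  # fixed seed so the run is reproducible

counts = [0] * (n + 1)
for _ in range(m):
    # Count tails in n fair flips (1 = tails, 0 = heads)
    tails = sum(random.randint(0, 1) for _ in range(n))
    counts[tails] += 1

# Relative frequencies estimate the PMF P_Y(k), k = 0, ..., n
pmf_estimate = [c / m for c in counts]
```

The estimate of PY(5), for instance, should land near the exact binomial value C(10, 5)/2^10 ≈ 0.246.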
Figure 2.3. MATLAB simulation results from Example 2.26.
Try running this code using a larger value for m. You should see more accurate relative frequency estimates as you increase m.
From the preceding examples, it should be clear that the probability mass function associated with a random variable, X, must obey certain properties. First, since PX(x) is a probability, it must be nonnegative and no greater than 1. Second, if we sum PX(x) over all x, then this is the same as the sum of the probabilities of all outcomes in the sample space, which must be equal to 1. Stated mathematically, we may conclude that
When developing the probability mass function for a random variable, it is useful to check that the PMF satisfies these properties.
In the paragraphs that follow, we list some commonly used discrete random variables, along with their probability mass functions, and some real-world applications in which each might typically be used.
A. Bernoulli Random Variable
This is the simplest possible random variable and is used to represent experiments that have two possible outcomes. These experiments are called Bernoulli trials and the resulting random variable is called a Bernoulli random variable. It is most common to associate the values {0, 1} with the two outcomes of the experiment. If X is a Bernoulli random variable, its probability mass function is of the form
(2.34)
The coin tossing experiment would produce a Bernoulli random variable. In that case, we may map the outcome H to the value X = 1 and T to X = 0. Also, we would use the value p = 1/2 assuming that the coin is fair. Examples of engineering applications might include radar systems where the random variable could indicate the presence (X = 1) or absence (X = 0) of a target, or a digital communication system where X = 1 might indicate a bit was transmitted in error while X = 0 would indicate that the bit was received correctly. In these examples, we would probably expect that the value of p would be much smaller than 1/2.
B. Binomial Random Variable
Consider repeating a Bernoulli trial n times, where the outcome of each trial is independent of all others. The Bernoulli trial has a sample space of S = {0, 1} and we say that the repeated experiment has a sample space of S^n = {0, 1}^n, which is referred to as a Cartesian space. That is, outcomes of the repeated trials are represented as n-element vectors whose elements are taken from S. Consider, for example, the outcome
(2.35)
The probability of this outcome occurring is
(2.36)
In fact, the order of the 1's and 0's in the sequence is irrelevant. Any outcome with exactly k 1's and n − k 0's would have the same probability. Now let the random variable X represent the number of times the outcome 1 occurred in the sequence of n trials. This is known as a binomial random variable and takes on integer values from 0 to n. To find the probability mass function of the binomial random variable, let Ak be the set of all outcomes that have exactly k 1's and n − k 0's. Note that all outcomes in this event occur with the same probability. Furthermore, all outcomes in this event are mutually exclusive. Then,
PX(k) = Pr(Ak) = (# of outcomes in Ak) × (probability of each outcome in Ak)
(ii.37)
The number of outcomes in the event Ak is just the number of combinations of n objects taken k at a time. Referring to Theorem 2.7, this is the binomial coefficient,
As a check, we verify that this probability mass function is properly normalized:
(2.39)
In the above calculation, we have used the binomial expansion
(2.40)
Binomial random variables occur, in practice, any time Bernoulli trials are repeated. For example, in a digital communication system, a packet of n bits may be transmitted and we might be interested in the number of bits in the packet that are received in error. Or, perhaps a bank manager might be interested in the number of tellers that are serving customers at a given point in time. Similarly, a medical technician might want to know how many cells from a blood sample are white and how many are red. In Example 2.24, the coin tossing experiment was repeated n times and the random variable Y represented the number of times tails occurred in the sequence of n tosses. This is a repetition of a Bernoulli trial, and hence the random variable Y should be a binomial random variable with p = 1/2 (assuming the coin is fair).
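The binomial PMF and its normalization check translate directly into a few lines of Python (a sketch, with n = 10 and p = 1/2 chosen to match the coin example):

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(X = k) = C(n, k) p^k (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 10, 0.5
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]
total = sum(pmf)  # equals 1 by the binomial expansion
```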
C. Poisson Random Variable
Consider a binomial random variable, X, where the number of repeated trials, n, is very large. In that case, evaluating the binomial coefficients can pose numerical problems. If the probability of success in each individual trial, p, is very small, then the binomial random variable can be well approximated by a Poisson random variable. That is, the Poisson random variable is a limiting case of the binomial random variable. Formally, let n approach infinity and p approach 0 in such a manner that the product np remains constant. Then, the binomial probability mass function converges to the form
(two.41)
which is the probability mass function of a Poisson random variable. We see that the Poisson random variable is properly normalized by noting that
(2.42)
(see Equation E.14 in Appendix E). The Poisson random variable is extremely important as it describes the behavior of many physical phenomena. It is commonly used in queuing theory and in communication networks. The number of customers arriving at a cashier in a store during some time interval may be well modeled as a Poisson random variable, as may the number of data packets arriving at a node in a computer network. We will see increasingly in later chapters that the Poisson random variable plays a fundamental role in our development of a probabilistic description of noise.
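The limiting relationship can be seen numerically. In the Python sketch below (parameter values chosen arbitrarily), the binomial PMF with large n and small p is compared term by term with the Poisson PMF having the same mean:

```python
import math
from math import comb

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

lam = 2.0
n = 10_000
p = lam / n  # keep np fixed as n grows

# Largest pointwise gap between the two PMFs over the first 20 values
max_err = max(abs(binomial_pmf(k, n, p) - poisson_pmf(k, lam))
              for k in range(20))
```

Doubling n roughly halves max_err, consistent with the binomial converging to the Poisson in this limit.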
D. Geometric Random Variable
Consider repeating a Bernoulli trial until the first occurrence of the outcome ξ0. If X represents the number of times the outcome ξ1 occurs before the first occurrence of ξ0, then X is a geometric random variable whose probability mass function is
(2.43)
We might also formulate the geometric random variable in a slightly different way. Suppose X counted the number of trials that were performed until the first occurrence of ξ0. Then, the probability mass function would take on the form,
(2.44)
The geometric random variable can also be generalized to the case where the outcome ξ0 must occur exactly m times. That is, the generalized geometric random variable counts the number of Bernoulli trials that must be repeated until the mth occurrence of the outcome ξ0. We can derive the form of the probability mass function for the generalized geometric random variable from what we know about binomial random variables. For the mth occurrence of ξ0 to occur on the kth trial, the first k − 1 trials must have had m − 1 occurrences of ξ0 and k − m occurrences of ξ1. Then
PX(k) = Pr({(m − 1) occurrences of ξ0 in k − 1 trials} ∩ {ξ0 occurs on the kth trial})
(2.45)
This generalized geometric random variable sometimes goes by the name of a Pascal random variable or the negative binomial random variable.
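A short Python sketch of the two PMFs just described (p = 1/2 and m = 3 are arbitrary illustrative choices; the functions count trials until the first and until the mth occurrence, respectively):

```python
from math import comb

def geometric_pmf(k, p):
    # P(X = k): first occurrence on trial k (k = 1, 2, ...)
    return (1 - p) ** (k - 1) * p

def negative_binomial_pmf(k, m, p):
    # P(X = k): mth occurrence lands on trial k (k = m, m + 1, ...)
    return comb(k - 1, m - 1) * p ** m * (1 - p) ** (k - m)

p, m = 0.5, 3
example = negative_binomial_pmf(5, m, p)  # 3rd occurrence on the 5th trial
```

Setting m = 1 recovers the geometric PMF, and both PMFs sum to 1 over their supports.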
Of course, one can define many other random variables and develop the associated probability mass functions. We have chosen to introduce some of the more important discrete random variables here. In the next chapter, we will introduce some continuous random variables and the appropriate probabilistic descriptions of these random variables. However, to close out this chapter, we provide a section showing how some of the material covered herein can be used in at least one engineering application.
Introduction
Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011
1.2.3 Moments and Expected Values
If X is a discrete random variable, then its mth moment is given by
(1.6)
[where the xi are specified in (1.1)], provided that the infinite sum converges absolutely. Where the infinite sum diverges, the moment is said not to exist. If X is a continuous random variable with probability density function f(x), then its mth moment is given by
(1.7)
provided that this integral converges absolutely.
The first moment, corresponding to m = 1, is commonly called the mean or expected value of X and written E[X] or μX. The mth central moment of X is defined as the mth moment of the random variable X − μX, provided that μX exists. The first central moment is zero. The second central moment is called the variance of X and written σX² or Var[X]. We have the equivalent formulas Var[X] = E[(X − μ)²] = E[X²] − μ².
The median of a random variable X is any value ν with the property that
If X is a random variable and g is a function, then Y = g(X) is also a random variable. If X is a discrete random variable with possible values x1, x2, …, then the expectation of g(X) is given by
(1.8)
provided that the sum converges absolutely. If X is continuous and has the probability density function fX, then the expected value of g(X) is evaluated from
(1.9)
The general formula, covering both the discrete and continuous cases, is
(1.10)
where FX is the distribution function of the random variable X. Technically speaking, the integral in (1.x) is a Lebesgue–Stieltjes integral. We do not require knowledge of such integrals in this text, just interpret (1.x) to signify (i.eight) when X is a detached random variable, and to stand for (1.9) when X possesses a probability density fX.
Let FY (y) = Pr{Y ≤ y} announce the distribution part for Y = chiliad (X). When X is a discrete random variable, and then
if yi = g (10i ) and provided that the second sum converges admittedly. In general,
(1.11)
If X is a discrete random variable, then so is Y = g(X). It may be, however, that X is a continuous random variable, while Y is discrete (the reader should provide an example). Even so, one may compute E[Y] from either form in (1.11) with the same result.
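The remark that either form of (1.11) yields the same E[Y] is easy to verify for a discrete example. In the Python sketch below (the PMF values are hypothetical), E[g(X)] is computed both directly through the PMF of X and through the induced PMF of Y = g(X):

```python
from collections import defaultdict

# Hypothetical discrete X; g(x) = x^2 maps distinct x-values to a common y
pmf_x = {-2: 0.2, -1: 0.3, 1: 0.3, 2: 0.2}

def g(x):
    return x ** 2

# Form (1.8): sum g(x) p(x) over the values of X
e_direct = sum(g(x) * p for x, p in pmf_x.items())

# Alternative: first build the PMF of Y = g(X), then take E[Y]
pmf_y = defaultdict(float)
for x, p in pmf_x.items():
    pmf_y[g(x)] += p
e_via_y = sum(y * p for y, p in pmf_y.items())
```

Both routes give E[Y] = 2.2 here, even though g collapses four x-values onto two y-values.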
RANDOM VARIABLES AND EXPECTATION
Sheldon M. Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009
EXAMPLE 4.2a
Consider a random variable X that is equal to 1, 2, or 3. If we know that
then it follows (since p(1) + p(2) + p(3) = 1) that
A graph of p(x) is presented in Figure 4.1.
FIGURE 4.1. Graph of p(x), Example 4.2a.
The cumulative distribution function F can be expressed in terms of p(x) by
If X is a discrete random variable whose set of possible values is x1, x2, x3, …, where x1 < x2 < x3 < …, then its distribution function F is a step function. That is, the value of F is constant in the intervals [xi−1, xi) and then takes a step (or jump) of size p(xi) at xi.
For instance, suppose X has a probability mass function given (as in Example 4.2a) by
Then the cumulative distribution function F of X is given by
This is graphically presented in Figure 4.2.
Figure 4.2. Graph of F(x).
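The step-function character of a discrete CDF is easy to realize in code. The Python sketch below uses a hypothetical PMF, since the example's actual values are not reproduced in this excerpt:

```python
import bisect

# Hypothetical PMF on the values 1, 2, 3
values = [1, 2, 3]
probs = [0.25, 0.5, 0.25]

# Running totals give the heights of the steps
cum = []
total = 0.0
for p in probs:
    total += p
    cum.append(total)

def cdf(x):
    # F(x) = P(X <= x): constant between the x_i, jumping by p(x_i) at x_i
    i = bisect.bisect_right(values, x)
    return cum[i - 1] if i > 0 else 0.0
```

Evaluating cdf just below and at a jump point shows the step: cdf(1.999) stays at 0.25 while cdf(2) jumps to 0.75.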
Whereas the set of possible values of a discrete random variable is a sequence, we often must consider random variables whose set of possible values is an interval. Let X be such a random variable. We say that X is a continuous random variable if there exists a nonnegative function f(x), defined for all real x ∈ (−∞, ∞), having the property that for any set B of real numbers
(4.2.1)
The function f(x) is called the probability density function of the random variable X.
In words, Equation 4.2.1 states that the probability that X will be in B may be obtained by integrating the probability density function over the set B. Since X must assume some value, f(x) must satisfy
All probability statements about X can be answered in terms of f(x). For instance, letting B = [a, b], we obtain from Equation 4.2.1 that
(4.2.2)
If we let a = b in the above, then
In words, this equation states that the probability that a continuous random variable will assume any particular value is zero. (See Figure 4.3.)
Figure 4.3. The probability density function.
The relationship between the cumulative distribution F(·) and the probability density f(·) is expressed by
Differentiating both sides yields
That is, the density is the derivative of the cumulative distribution function. A somewhat more intuitive interpretation of the density function may be obtained from Equation 4.2.2 as follows:
when ε is small. In other words, the probability that X will be contained in an interval of length ε around the point a is approximately εf(a). From this, we see that f(a) is a measure of how likely it is that the random variable will be near a.
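This interpretation can be checked numerically. The Python sketch below (standard normal density, with a = 1 and ε = 10⁻⁴ as arbitrary choices) compares P(a − ε/2 ≤ X ≤ a + ε/2) with εf(a):

```python
import math

def normal_pdf(x):
    # Standard normal density f(x)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, eps = 1.0, 1e-4
exact = normal_cdf(a + eps / 2) - normal_cdf(a - eps / 2)
approx = eps * normal_pdf(a)  # the small-interval approximation
```

For this small ε the two quantities agree to many significant figures, illustrating that f(a) measures how likely X is to fall near a.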
Source: https://www.sciencedirect.com/topics/mathematics/discrete-random-variable