1.11.3 - Properties of the estimators


In order to make inferences about the weighted least squares estimators, we need to know some properties of these estimators, chiefly their expectation (mean) and variance, since from these we obtain their sampling distributions, whether exact or asymptotic.

 

Expected value of the slope coefficient

 

We now compute $ \mathbb{E} \left[ \widehat{\beta}_{w1} \right] $ using the expression for the estimator found in (1.11.1.5) of the section on estimation of the model parameters. This gives:


$$\mathbb{E} \left[ \widehat{\beta}_{w1} \right] = \mathbb{E} \left[ \dfrac{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w) Y_i}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w )^2 } \right] = \mathbb{E} \left[ \dfrac{ \displaystyle \sum_{i=1}^{n} w_iX_iY_i - \displaystyle \sum_{i=1}^{n} w_i\overline{X}_w Y_i}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2} \right]=$$


$$= \mathbb{E} \left[ \underbrace{\dfrac{\displaystyle \sum_{i=1}^{n}w_iX_i(\beta_{w0} + \beta_{w1}X_i + \varepsilon_i)}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2}}_{z_1}- \underbrace{\dfrac{\displaystyle \sum_{i=1}^{n} w_i\overline{X}_w (\beta_{w0} + \beta_{w1}X_i + \varepsilon_i) }{ \displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2}}_{z_2}\right]\quad ~~(1.11.1.6)$$

We apply some algebra to $ z_1-z_2, $ recalling that $ \mathbb{E}(\varepsilon_i)=0, $ so the error terms can be dropped. We then have


$$z_1-z_2= \dfrac{\displaystyle \sum_{i=1}^{n} w_iX_i ( \beta_{w0} + \beta_{w1}X_i)}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2} - \dfrac{\displaystyle \sum_{i=1}^{n} w_i \overline{X}_w (\beta_{w0} + \beta_{w1}X_i)}{\displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)^2}=$$


$$= \dfrac{\beta_{w0} \left( \displaystyle \sum_{i=1}^{n} w_i X_i - \displaystyle \sum_{i=1}^{n} w_i \overline{X}_w \right)+ \beta_{w1} \left( \displaystyle \sum_{i=1}^{n} w_i X^2_i-\displaystyle \sum_{i=1}^{n} w_i \overline{X}_wX_i \right)}{ \displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2}=$$


$$= \dfrac{\beta_{w0} \left(\displaystyle \sum_{i=1}^{n} w_i X_i - \dfrac{\displaystyle \sum_{i=1}^{n}w_i \displaystyle \sum_{i=1}^{n}w_iX_i}{\displaystyle \sum_{i=1}^{n} w_i}\right) + \beta_{w1} \left( \displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)X_i \right)}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2}=$$


$$= \dfrac{\beta_{w0} \left( \displaystyle \sum_{i=1}^{n} w_i X_i - \displaystyle \sum_{i=1}^{n} w_i X_i \right)}{ \displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2} + \dfrac{\beta_{w1}\left( \displaystyle \sum_{i=1}^{n} w_i(X_i-\overline{X}_w )^2\right)}{\displaystyle \sum_{i=1}^{n} w_i (X_i-\overline{X}_w)^2}$$

Returning to equation (1.11.1.6), we obtain


$$= \mathbb{E}\left[\underbrace{\dfrac{\beta_{w0} \left( \displaystyle \sum_{i=1}^{n} w_i X_i - \displaystyle \sum_{i=1}^{n} w_i X_i \right)}{ \displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w)^2}}_{=0}\right] + \mathbb{E}\left[\beta_{w1}\underbrace{\dfrac{\left( \displaystyle \sum_{i=1}^{n} w_i(X_i-\overline{X}_w )^2\right)}{\displaystyle \sum_{i=1}^{n} w_i (X_i-\overline{X}_w)^2}}_{=1}\right]=\beta_{w1}$$

Therefore, $ \mathbb{E}(\widehat{\beta}_{w1})=\beta_{w1}, $ that is, the estimator is unbiased.
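The key algebraic step above is the identity $ \displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)X_i = \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)^2, $ which holds because the weighted deviations sum to zero. A minimal numerical check of this identity, with arbitrary illustrative weights and covariate values (an assumption, not data from this section):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 5.0, size=20)   # arbitrary positive weights
X = rng.normal(10.0, 3.0, size=20)   # arbitrary covariate values

Xbar_w = np.sum(w * X) / np.sum(w)   # weighted mean of X

lhs = np.sum(w * (X - Xbar_w) * X)       # sum of w_i (X_i - Xbar_w) X_i
rhs = np.sum(w * (X - Xbar_w) ** 2)      # sum of w_i (X_i - Xbar_w)^2
print(np.isclose(lhs, rhs))              # True: the cross term vanishes
```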

 

Expected value of the intercept

 

In this topic we compute the expected value of the intercept $ \widehat{\beta}_{w0}. $ Indeed:


$$\mathbb{E} \left[ \widehat{\beta}_{w0} \right]= \mathbb{E} \left[ \overline{Y}_w - \widehat{\beta}_{w1}\overline{X}_w \right] = \mathbb{E} \left[ \dfrac{\displaystyle \sum_{i=1}^{n} w_iY_i}{\displaystyle \sum_{i=1}^{n} w_i} \right] - \mathbb{E} \left[ \dfrac{\displaystyle \sum_{i=1}^{n}w_i(X_i - \overline{X}_w)Y_i}{\displaystyle \sum_{i=1}^{n}w_i(X_i - \overline{X}_w)^2}\right]\overline{X}_w=$$


$$=\mathbb{E} \left[ \underbrace{\dfrac{\displaystyle \sum_{i=1}^{n} w_i (\beta_{w0}+\beta_{w1}X_i+\varepsilon_i)}{\displaystyle \sum_{i=1}^{n}w_i} -\overline{X}_w\displaystyle \sum_{i=1}^{n} \dfrac{w_i(X_i - \overline{X}_w)(\beta_{w0}+\beta_{w1}X_i + \varepsilon_i)}{\displaystyle \sum_{i=1}^{n} w_i(X_i-\overline{X}_w)^2}}_{(*)}\right]\quad (1.11.1.7)$$

We apply some algebra to (*):


$$(*)=\dfrac{\displaystyle \sum_{i=1}^{n}w_i(\beta_{w0}+\beta_{w1}X_i)}{\displaystyle \sum_{i=1}^{n}w_i} - \overline{X}_w \dfrac{\displaystyle \sum_{i=1}^{n}w_i(X_i-\overline{X}_w)(\beta_{w0}+\beta_{w1}X_i)}{\displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)^2}=$$


$$=\dfrac{\beta_{w0}\displaystyle \sum_{i=1}^{n}w_i}{\displaystyle \sum_{i=1}^{n}w_i}+\dfrac{\beta_{w1} \displaystyle \sum_{i=1}^{n}X_iw_i}{\displaystyle \sum_{i=1}^{n} w_i} - \dfrac{\beta_{w0}\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)}{\displaystyle \sum_{i=1}^{n}w_i(X_i - \overline{X}_w)^2} - \dfrac{\beta_{w1}\overline{X}_w\displaystyle \sum_{i=1}^{n}w_i(X_i-\overline{X}_w)X_i}{\displaystyle \sum_{i=1}^{n}w_i(X_i - \overline{X}_w)^2}=$$


$$=\beta_{w0} + \beta_{w1}\overline{X}_w - \dfrac{\beta_{w0}\overline{X}_w}{\displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)^2} \left( \displaystyle \sum_{i=1}^{n}w_iX_i - \dfrac{\displaystyle \sum_{i=1}^{n}w_i\displaystyle \sum_{i=1}^{n}w_iX_i}{\displaystyle \sum_{i=1}^{n}w_i} \right) - \beta_{w1}\overline{X}_w\underbrace{\dfrac{\displaystyle \sum_{i=1}^{n}w_i(X_i-\overline{X}_w)^2}{\displaystyle \sum_{i=1}^{n}w_i(X_i - \overline{X}_w)^2}}_{=1}$$

Returning to equation (1.11.1.7), we obtain


$$=\mathbb{E}[\beta_{w0}]+\mathbb{E}\left[\beta_{w1}\overline{X}_w - \dfrac{\beta_{w0}\overline{X}_w}{\displaystyle \sum_{i=1}^{n} w_i(X_i - \overline{X}_w)^2}\left( \displaystyle \sum_{i=1}^{n}w_iX_i - \displaystyle \sum_{i=1}^{n}w_iX_i\right) - \beta_{w1}\overline{X}_w \right]= \beta_{w0}$$

Therefore, $ \mathbb{E}(\widehat{\beta}_{w0})=\beta_{w0}, $ that is, the estimator is unbiased.
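Both unbiasedness results can be illustrated with a short Monte Carlo sketch. The true parameters, design points and weights below are arbitrary choices for illustration (not from this section), with $ \text{Var}(\varepsilon_i)=\sigma^2/w_i $:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_w0, beta_w1, sigma = 2.0, 0.5, 1.5      # true parameters (assumed)
X = np.repeat([0.0, 25.0, 50.0, 100.0], 5)   # fixed design points
w = np.repeat([4.0, 2.0, 1.0, 0.5], 5)       # weights: Var(eps_i) = sigma^2 / w_i

def wls_fit(y, X, w):
    """Closed-form weighted least squares estimates of (beta_w0, beta_w1)."""
    Xbar_w = np.sum(w * X) / np.sum(w)
    Ybar_w = np.sum(w * y) / np.sum(w)
    b1 = np.sum(w * (X - Xbar_w) * y) / np.sum(w * (X - Xbar_w) ** 2)
    return Ybar_w - b1 * Xbar_w, b1

# Average the estimates over many simulated samples
est = np.array([
    wls_fit(beta_w0 + beta_w1 * X + rng.normal(0.0, sigma / np.sqrt(w)), X, w)
    for _ in range(20000)
])
print(est.mean(axis=0))   # close to (2.0, 0.5): both estimators are unbiased
```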

 

Variance of the slope coefficient

 

To compute the variance of $ \widehat{\beta}_{w1}, $ we have:


$$\text{Var}(\widehat{\beta}_{w1}) = \text{Var} \left(\displaystyle \dfrac{\displaystyle\sum_{i=1}^{n}w_i (x_i-\overline{X}_w) Y_i }{\displaystyle\sum^n_{i=1}w_i(x_i-\overline{X}_w)^2}\right) \overset{Y_i~ \text{ind.}}{=}\dfrac{1 }{\displaystyle[\sum^n_{i=1}w_i(x_i-\overline{X}_w)^2]^2}\sum_{i=1}^{n}w_i (x_i-\overline{X}_w)^2\text{Var}(w^{1/2}_iY_i) =\dfrac{\sigma^2 }{\displaystyle \sum^n_{i=1}w_i(x_i-\overline{X}_w)^2}$$

 

Variance of the intercept

 

From the topic on estimation of the model parameters, we have


$$\widehat{\beta}_{w0} = \overline{Y}_w - \widehat{\beta}_{w1} \overline{X}_w$$

From this, it follows that


$$\text{Var}(\widehat{\beta}_{w0}) = \text{Var}(\overline{Y}_w - \widehat{\beta}_{w1}\overline{X}_w) = \text{Var}(\overline{Y}_w) + \overline{X}^2_w\text{Var}(\widehat{\beta}_{w1}) - 2\text{Cov}(\overline{Y}_w,\widehat{\beta}_{w1}\overline{X}_w)$$

Note that


$$\text{Var}(\overline{Y}_w)=\frac{1}{\left(\displaystyle\sum^n_{i=1}w_i\right)^2}\sum^n_{i=1}w_i\text{Var}(w^{1/2}_i Y_i)=\frac{\sigma^2}{\displaystyle\sum^n_{i=1}w_i}$$

and

$$\overline{X}^2_w\text{Var}(\widehat{\beta}_{w1})=\dfrac{\overline{X}^2_w\sigma^2 }{\displaystyle \sum^n_{i=1}w_i(x_i-\overline{X}_w)^2}$$

Substituting, we obtain:

$$\text{Var}(\widehat{\beta}_{w0}) = \frac{\sigma^2}{\displaystyle\sum^n_{i=1}w_i}+ \dfrac{\overline{X}^2_w\sigma^2 }{\displaystyle \sum^n_{i=1}w_i(x_i-\overline{X}_w)^2}- 2\text{Cov}(\overline{Y}_w,\widehat{\beta}_{w1}\overline{X}_w)\quad\quad\quad (1.11.1.8)$$

We now compute $ \text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w) $


$$\text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w)= \mathbb{E}\left[ \overline{Y}_w \widehat{\beta}_{w1} \overline{X}_w \right]-\mathbb{E}\left[\overline{Y}_w \right] \mathbb{E}\left[ \widehat{\beta}_{w1} \overline{X}_w \right]=\overline{X}_w \mathbb{E} \left[ \overline{Y}_w \widehat{\beta}_{w1} \right] - \mathbb{E} \left[\overline{Y}_w \right] \beta_{w1} \overline{X}_w =$$

$$= \overline{X}_w \mathbb{E} \left[ \dfrac{ \widehat{\beta}_{w1} \displaystyle \sum_{i=1}^{n} w_i\left( \beta_{w0} + \beta_{w1}X_i + \varepsilon_i \right)}{ \displaystyle \sum_{i=1}^{n} w_i} \right] - \mathbb{E} \left[ \dfrac{ \displaystyle \sum_{i=1}^{n} w_i\left( \beta_{w0} + \beta_{w1}X_i + \varepsilon_i \right)}{ \displaystyle \sum_{i=1}^{n} w_i} \right]\beta_{w1} \overline{X}_w =$$

$$= \dfrac{\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i \left( \beta_{w0}\mathbb{E}\left[ \widehat{\beta}_{w1} \right] + \beta_{w1}X_i\,\mathbb{E}\left[ \widehat{\beta}_{w1} \right] + \mathbb{E}\left[ \widehat{\beta}_{w1}\varepsilon_i \right] - \beta_{w1}\beta_{w0} - \beta^2_{w1}X_i - \beta_{w1} \mathbb{E}\left[\varepsilon_i \right]\right)}{\displaystyle \sum_{i=1}^{n} w_i}$$

but $ \mathbb{E}\left[\varepsilon_i \right]=0 $ and $ \mathbb{E}[\widehat{\beta}_{w1}]=\beta_{w1}, $ so:


$$\text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w)= \dfrac{\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i \left( \beta_{w1}\beta_{w0} + \beta^2_{w1}X_i + \mathbb{E}\left[ \widehat{\beta}_{w1}\varepsilon_i \right] - \beta_{w1}\beta_{w0} - \beta^2_{w1}X_i\right)}{\displaystyle \sum_{i=1}^{n} w_i}$$

Hence,


$$\text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w)=\dfrac{\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i \mathbb{E}\left[ \widehat{\beta}_{w1}\varepsilon_i \right]}{\displaystyle \sum_{i=1}^{n} w_i} \quad ~~(1.11.1.9)$$

Now, observe that


$$\mathbb{E}\left[ \widehat{\beta}_{w1}\varepsilon_i \right] = \mathbb{E}\left[ \dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)Y_j}{\displaystyle \sum_{j=1}^{n}w_j (x_j - \overline{X}_w)^2} \varepsilon_i \right] = \dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)\mathbb{E}\left[ Y_j\varepsilon_i \right]}{\displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}= \dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)\mathbb{E}\left[ \left( \beta_{w0} + \beta_{w1}x_j + \varepsilon_j \right)\varepsilon_i \right]}{\displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2} =$$

$$=\dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)\left( \beta_{w0}\mathbb{E}\left[\varepsilon_i \right] + \beta_{w1}x_j\mathbb{E}\left[\varepsilon_i \right] + \mathbb{E}\left[\varepsilon_j \varepsilon_i \right] \right)}{\displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}= \dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)\mathbb{E}\left[\varepsilon_j \varepsilon_i \right]}{\displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}$$

Thus, we have two cases:


$$\mathbb{E}\left[\varepsilon_j \varepsilon_i \right] = \left\{ \begin{array}{ll} \dfrac{\sigma^2}{w_i} & \mbox{if } i = j;\\[6pt] 0 & \mbox{if } i \neq j.\end{array} \right.$$

Substituting this into (1.11.1.9), we obtain


$$\text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w)=\dfrac{\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i \mathbb{E}\left[ \widehat{\beta}_{w1}\varepsilon_i \right]}{\displaystyle \sum_{i=1}^{n} w_i}=\dfrac{\overline{X}_w \displaystyle \sum_{i=1}^{n} w_i \dfrac{\displaystyle \sum_{j=1}^{n} w_j (x_j - \overline{X}_w)\mathbb{E}\left[\varepsilon_j \varepsilon_i \right]}{\displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}}{\displaystyle \sum_{i=1}^{n} w_i}$$

Since $ \mathbb{E}\left[\varepsilon_j \varepsilon_i \right]=0 $ for $ i \neq j, $ the cross terms vanish and only the terms with $ i=j $ survive, so the expression above reduces to:


$$\text{Cov}(\overline{Y}_w , \widehat{\beta}_{w1} \overline{X}_w)= \dfrac{ \overline{X}_w \displaystyle \sum_{i=1}^{n}w^2_i\left(X_i - \overline{X}_w\right)\mathbb{E}\left[ \varepsilon_i^2\right] }{ \displaystyle \sum_{i=1}^{n} w_i \displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2} =\dfrac{ \overline{X}_w \displaystyle \sum_{i=1}^{n}w^2_i\left(X_i - \overline{X}_w\right)\dfrac{\sigma^2}{w_i}}{ \displaystyle \sum_{i=1}^{n} w_i \displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}=$$

$$=\sigma^2\dfrac{ \overline{X}_w \displaystyle \sum_{i=1}^{n}w_i\left(X_i - \overline{X}_w\right) }{ \displaystyle \sum_{i=1}^{n} w_i \displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}=\sigma^2\dfrac{ \overline{X}_w\left(\displaystyle \sum_{i=1}^{n}w_iX_i -\displaystyle \sum_{i=1}^{n}w_i\overline{X}_w\right) }{ \displaystyle \sum_{i=1}^{n} w_i \displaystyle \sum_{j=1}^{n} w_j(x_j - \overline{X}_w)^2}=0,$$

since $ \displaystyle \sum_{i=1}^{n}w_i\overline{X}_w = \sum_{i=1}^{n}w_iX_i. $

Finally, returning to equation (1.11.1.8), we obtain

$$\text{Var}(\widehat{\beta}_{w0}) =\sigma^2\left( \frac{1}{\displaystyle\sum^n_{i=1}w_i}+ \dfrac{\overline{X}^2_w }{\displaystyle \sum^n_{i=1}w_i(x_i-\overline{X}_w)^2}\right)$$
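As a sanity check, both closed-form variances can be compared with Monte Carlo sample variances. This is only a sketch: the design, weights and parameters below are illustrative assumptions, with $ \text{Var}(\varepsilon_i)=\sigma^2/w_i $:

```python
import numpy as np

rng = np.random.default_rng(2)
beta_w0, beta_w1, sigma = 2.0, 0.5, 1.5
X = np.repeat([0.0, 25.0, 50.0, 100.0], 5)
w = np.repeat([4.0, 2.0, 1.0, 0.5], 5)

Xbar_w = np.sum(w * X) / np.sum(w)
Sxx_w = np.sum(w * (X - Xbar_w) ** 2)

# Closed-form variances derived in this section
var_b1 = sigma**2 / Sxx_w
var_b0 = sigma**2 * (1.0 / np.sum(w) + Xbar_w**2 / Sxx_w)

# Monte Carlo: refit many simulated samples and take sample variances
ests = []
for _ in range(20000):
    y = beta_w0 + beta_w1 * X + rng.normal(0.0, sigma / np.sqrt(w))
    b1 = np.sum(w * (X - Xbar_w) * y) / Sxx_w
    b0 = np.sum(w * y) / np.sum(w) - b1 * Xbar_w
    ests.append((b0, b1))
ests = np.array(ests)

print(var_b0, var_b1)     # theoretical values
print(ests.var(axis=0))   # empirical values, close to the line above
```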

 

In matrix form, we have

$$\text{Var}(\widehat{\mathbf{\beta}}_{w}) = (\mathbf{B}^{\top}\mathbf{B})^{-1}\sigma^2 = (\mathbf{X}^{\top} V^{-1}\mathbf{X})^{-1}\sigma^2$$
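A minimal sketch of the matrix form, assuming $ V=\text{diag}(1/w_i) $ so that $ V^{-1}=\text{diag}(w_i) $; the diagonal of the resulting matrix should reproduce the two scalar variance formulas above (design, weights and $ \sigma $ again illustrative):

```python
import numpy as np

sigma = 1.5
x = np.repeat([0.0, 25.0, 50.0, 100.0], 5)
w = np.repeat([4.0, 2.0, 1.0, 0.5], 5)

Xmat = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
Vinv = np.diag(w)                              # V^{-1} = diag(w_i)

cov = np.linalg.inv(Xmat.T @ Vinv @ Xmat) * sigma**2
print(cov[0, 0])   # Var(b0_hat) = sigma^2 * (1/sum(w) + xbar_w^2 / Sxx_w)
print(cov[1, 1])   # Var(b1_hat) = sigma^2 / Sxx_w
```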

 

 

Expected value of the QME

 

In this topic, we compute the expected value of the mean squared error (QME).

$$\text{SQE}=(\mathbf{Z}-\widehat{\mathbf{Z}})^{\top}(\mathbf{Z}-\widehat{\mathbf{Z}})= (\mathbf{Z}-\mathbf{B}\widehat{\beta})^{\top}(\mathbf{Z}-\mathbf{B}\widehat{\beta})= \mathbf{Y}^{\top}(V^{-1}-V^{-1}\mathbf{X}(\mathbf{X}^{\top}V^{-1}\mathbf{X})^{-1}\mathbf{X}^{\top}V^{-1})\mathbf{Y}$$

$$\mathbb{E}(\text{SQE})= \sigma^2\,\text{tr}\left[(V^{-1}-V^{-1}\mathbf{X}(\mathbf{X}^{\top}V^{-1}\mathbf{X})^{-1}\mathbf{X}^{\top}V^{-1})V\right]$$

Applying the theorem for distributions of quadratic forms (the matrix $ \mathbf{I}-\mathbf{B}(\mathbf{B}^{\top}\mathbf{B})^{-1}\mathbf{B}^{\top} $ is idempotent, so its trace equals its rank), we obtain:

$$\text{rank}\left(\mathbf{I}-\mathbf{B}(\mathbf{B}^{\top}\mathbf{B})^{-1}\mathbf{B}^{\top}\right)=n-(p+1)$$

Therefore, an unbiased estimator for $ \sigma^2 $ is given by

$$\widehat{\sigma}^2=\text{QME}=\frac{\text{SQE}}{n-(p+1)}$$
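With $ V=\text{diag}(1/w_i), $ the quadratic form above reduces to the weighted residual sum of squares $ \sum_i w_i(Y_i-\widehat{Y}_i)^2, $ which is exactly the numerator used for $ QME_w $ in the example below. A short sketch of the estimator, with hypothetical data chosen only to show the computation:

```python
import numpy as np

y = np.array([1.2, 2.1, 2.8, 4.2, 4.9])   # hypothetical responses
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical covariate
w = np.array([2.0, 1.0, 1.0, 0.5, 0.5])   # hypothetical weights

Xmat = np.column_stack([np.ones_like(x), x])
Vinv = np.diag(w)

# WLS coefficients: (X^T V^-1 X)^-1 X^T V^-1 y
beta_hat = np.linalg.solve(Xmat.T @ Vinv @ Xmat, Xmat.T @ Vinv @ y)

resid = y - Xmat @ beta_hat
sqe = resid @ Vinv @ resid        # equals sum of w_i * (y_i - yhat_i)^2
n, p = len(y), 1                  # p + 1 = 2 estimated coefficients
qme = sqe / (n - (p + 1))         # unbiased estimate of sigma^2
print(qme)
```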

 

Example 1.11.2.1

Returning to the motivating example, consider the calibration curve for the assay of a certain chemical compound carried out on a piece of equipment called a chromatograph. Next, we compute the variances of the parameters $ \beta_{w0} $ and $ \beta_{w1}. $


 

Area Concentration Weight $ (w_i) $ Fitted value $ (\widehat{Y}_i) $ Residual $ (Y_i-\widehat{Y}_i) $ $ w_i(Y_i-\widehat{Y}_i)^2 $
0,078 0 5,446 0,0204 0,0576 0,018
1,329 0 5,446 0,0204 1,31 9,33
0,483 0 5,446 0,0204 0,46 1,17
0,698 0 5,446 0,0204 0,68 2,50
0,634 0 5,446 0,0204 0,61 2,05
0,652 0 5,446 0,0204 0,63 2,17
0,071 0 5,446 0,0204 0,05 0,01
20,718 25 0,267 18,039 2,68 1,92
21,805 25 0,267 18,039 3,77 3,78
16,554 25 0,267 18,039 -1,48 0,59
19,948 25 0,267 18,039 1,91 0,97
21,676 25 0,267 18,039 3,64 3,53
22,207 25 0,267 18,039 4,17 4,64
19,671 25 0,267 18,039 1,63 0,71
33,833 50 2,647 36,057 -2,22 13,09
34,726 50 2,647 36,057 -1,33 4,69
35,463 50 2,647 36,057 -0,59 0,93
34,04 50 2,647 36,057 -2,02 10,77
34,194 50 2,647 36,057 -1,86 9,19
33,664 50 2,647 36,057 -2,39 15,16
34,517 50 2,647 36,057 -1,54 6,28
79,224 100 0,04014 72,093 7,13 2,04
73,292 100 0,04014 72,093 1,20 0,06
85,514 100 0,04014 72,093 13,42 7,23
82,072 100 0,04014 72,093 9,98 4,00
85,044 100 0,04014 72,093 12,95 6,73
73,876 100 0,04014 72,093 1,78 0,13
82,568 100 0,04014 72,093 10,47 4,40
108,065 150 0,01053 108,130 -0,06 0,00
118,268 150 0,01053 108,130 10,14 1,08
108,89 150 0,01053 108,130 0,76 0,01
127,183 150 0,01053 108,130 19,05 3,82
121,447 150 0,01053 108,130 13,32 1,87
122,414 150 0,01053 108,130 14,28 2,15
135,555 150 0,01053 108,130 27,43 7,92
224,932 250 0,01254 180,203 44,73 25,08
200,113 250 0,01254 180,203 19,91 4,97
200,368 250 0,01254 180,203 20,17 5,10
205,17 250 0,01254 180,203 24,97 7,81
213,059 250 0,01254 180,203 32,86 13,53
207,931 250 0,01254 180,203 27,73 9,64
201,766 250 0,01254 180,203 21,56 5,83
371,534 500 0,00470 360,385 11,15 0,58
408,86 500 0,00470 360,385 48,48 11,05
383,509 500 0,00470 360,385 23,12 2,51
405,143 500 0,00470 360,385 44,76 9,42
404,132 500 0,00470 360,385 43,75 9,00
379,243 500 0,00470 360,385 18,86 1,67
387,419 500 0,00470 360,385 27,03 3,44
5983,552 7525 59,00039 5424,486   244,56
      $ QME_w $ $ \displaystyle\sum^n_{i=1}\dfrac{w_i(Y_i-\widehat{Y}_i)^2}{n-2} $ 5,203371015
        Residual standard deviation $ (\sqrt{QME_w}) $ 2,281089874

 

 

 

Table 1.11.2.1: Calculations for $ QME_w. $

From the results in Table 1.11.2.1, we compute $ QME_w $ as follows

$$QME_w=\displaystyle\sum^n_{i=1}\dfrac{w_i(Y_i-\widehat{Y}_i)^2}{n-2}=\dfrac{244,56}{49-2}=5,203371015$$

Hence, the standard deviation of the residuals is obtained as

$$\sqrt{QME_w}=\sqrt{\displaystyle\sum^n_{i=1}\dfrac{w_i(Y_i-\widehat{Y}_i)^2}{n-2}}=2,281089874$$

Now we compute the variances of $ \widehat{\beta}_{w1} $ and $ \widehat{\beta}_{w0}; $ for this, consider the following table:

 

n Area (Y) Concentration (X) Weight $ (w_i) $ $ w_i X_i $ $ w_i Y_i $ $ X_i-\overline{X}_w $ $ w_i X^2_i $ $ w_i(X_i-\overline{X}_w)^2 $
1 0,078 0 5,45 0 0,42 -17,81 0,00 1727,75
2 1,329 0 5,45 0 7,24 -17,81 0,00 1727,75
3 0,483 0 5,45 0 2,63 -17,81 0,00 1727,75
4 0,698 0 5,45 0 3,80 -17,81 0,00 1727,75
5 0,634 0 5,45 0 3,45 -17,81 0,00 1727,75
6 0,652 0 5,45 0 3,55 -17,81 0,00 1727,75
7 0,071 0 5,45 0 0,39 -17,81 0,00 1727,75
8 20,718 25 0,27 6,67 5,53 7,19 166,75 13,79
9 21,805 25 0,27 6,67 5,82 7,19 166,75 13,79
10 16,554 25 0,27 6,67 4,42 7,19 166,75 13,79
11 19,948 25 0,27 6,67 5,32 7,19 166,75 13,79
12 21,676 25 0,27 6,67 5,78 7,19 166,75 13,79
13 22,207 25 0,27 6,67 5,92 7,19 166,75 13,79
14 19,671 25 0,27 6,67 5,25 7,19 166,75 13,79
15 33,833 50 2,65 132,37 89,57 32,19 6618,62 2743,15
16 34,726 50 2,65 132,37 91,94 32,19 6618,62 2743,15
17 35,463 50 2,65 132,37 93,89 32,19 6618,62 2743,15
18 34,04 50 2,65 132,37 90,12 32,19 6618,62 2743,15
19 34,194 50 2,65 132,37 90,53 32,19 6618,62 2743,15
20 33,664 50 2,65 132,37 89,12 32,19 6618,62 2743,15
21 34,517 50 2,65 132,37 91,38 32,19 6618,62 2743,15
22 79,224 100 0,040 4,01 3,18 82,19 401,40 271,15
23 73,292 100 0,040 4,01 2,94 82,19 401,40 271,15
24 85,514 100 0,040 4,01 3,43 82,19 401,40 271,15
25 82,072 100 0,040 4,01 3,29 82,19 401,40 271,15
26 85,044 100 0,040 4,01 3,41 82,19 401,40 271,15
27 73,876 100 0,040 4,01 2,97 82,19 401,40 271,15
28 82,568 100 0,040 4,01 3,31 82,19 401,40 271,15
29 108,065 150 0,011 1,58 1,14 132,19 236,89 183,98
30 118,268 150 0,011 1,58 1,25 132,19 236,89 183,98
31 108,89 150 0,011 1,58 1,15 132,19 236,89 183,98
32 127,183 150 0,011 1,58 1,34 132,19 236,89 183,98
33 121,447 150 0,011 1,58 1,28 132,19 236,89 183,98
34 122,414 150 0,011 1,58 1,29 132,19 236,89 183,98
35 135,555 150 0,011 1,58 1,43 132,19 236,89 183,98
36 224,932 250 0,013 3,13 2,82 232,19 783,53 675,86
37 200,113 250 0,013 3,13 2,51 232,19 783,53 675,86
38 200,368 250 0,013 3,13 2,51 232,19 783,53 675,86
39 205,17 250 0,013 3,13 2,57 232,19 783,53 675,86
40 213,059 250 0,013 3,13 2,67 232,19 783,53 675,86
41 207,931 250 0,013 3,13 2,61 232,19 783,53 675,86
42 201,766 250 0,013 3,13 2,53 232,19 783,53 675,86
43 371,534 500 0,0047 2,35 1,75 482,19 1175,19 1092,95
44 408,86 500 0,0047 2,35 1,92 482,19 1175,19 1092,95
45 383,509 500 0,0047 2,35 1,80 482,19 1175,19 1092,95
46 405,143 500 0,0047 2,35 1,90 482,19 1175,19 1092,95
47 404,132 500 0,0047 2,35 1,90 482,19 1175,19 1092,95
48 379,243 500 0,0047 2,35 1,78 482,19 1175,19 1092,95
49 387,419 500 0,0047 2,35 1,82 482,19 1175,19 1092,95
Sum     59,00       65676,70 46960,42
Weighted mean 12,86 17,81         $ \text{Var}(\widehat{\beta}_{w1}) $ 0,000110803
$ QME_w $ 5,203371           $ \text{SD}(\widehat{\beta}_{w1}) $ 0,010526316
              $ \text{Var}(\widehat{\beta}_{w0}) $ 0,12334151
              $ \text{SD}(\widehat{\beta}_{w0}) $ 0,3512001

 

Table 1.11.2.2: Calculations for the variances of the coefficients.

From Table 1.11.2.2 we obtain the quantities needed to compute the variances of the coefficients:

$$\text{Var}(\widehat{\beta}_{w0})=\dfrac{QME_w\displaystyle\sum_{i=1}^{n} w_i X^2_i }{\displaystyle \sum_{i=1}^{n} w_i\sum_{i=1}^{n} w_i (X_i - \overline{X}_w )^2}=\dfrac{5,203371\times 65676,7}{59\times 46960,42}=0,12334151$$


$$\text{Var}(\widehat{\beta}_{w1})=\dfrac{QME_w}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w )^2}=\dfrac{5,203371}{46960,42}=0,000110803$$

Therefore, the standard deviations of the coefficients are

$$\widehat{\text{SD}}(\widehat{\beta}_{w0})=\sqrt{\dfrac{QME_w\displaystyle\sum_{i=1}^{n} w_i X^2_i }{\displaystyle \sum_{i=1}^{n} w_i\sum_{i=1}^{n} w_i (X_i - \overline{X}_w )^2}}=0,3512001$$


$$\widehat{\text{SD}}(\widehat{\beta}_{w1})=\sqrt{\dfrac{QME_w}{\displaystyle \sum_{i=1}^{n} w_i (X_i - \overline{X}_w )^2}}=0,010526316$$
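As a cross-check, the quantities in Tables 1.11.2.1 and 1.11.2.2 can be reproduced directly from the raw data. The sketch below uses only NumPy; since the published weights are rounded, the final digits may differ slightly from the tables:

```python
import numpy as np

# Raw data from Example 1.11.2.1: 7 replicate areas per concentration level
conc = np.repeat([0.0, 25.0, 50.0, 100.0, 150.0, 250.0, 500.0], 7)
w = np.repeat([5.446, 0.267, 2.647, 0.04014, 0.01053, 0.01254, 0.00470], 7)
area = np.array([
    0.078, 1.329, 0.483, 0.698, 0.634, 0.652, 0.071,
    20.718, 21.805, 16.554, 19.948, 21.676, 22.207, 19.671,
    33.833, 34.726, 35.463, 34.040, 34.194, 33.664, 34.517,
    79.224, 73.292, 85.514, 82.072, 85.044, 73.876, 82.568,
    108.065, 118.268, 108.890, 127.183, 121.447, 122.414, 135.555,
    224.932, 200.113, 200.368, 205.170, 213.059, 207.931, 201.766,
    371.534, 408.860, 383.509, 405.143, 404.132, 379.243, 387.419,
])

xbar_w = np.sum(w * conc) / np.sum(w)             # weighted mean, ~17.81
sxx_w = np.sum(w * (conc - xbar_w) ** 2)          # ~46960.42

b1 = np.sum(w * (conc - xbar_w) * area) / sxx_w   # slope estimate
b0 = np.sum(w * area) / np.sum(w) - b1 * xbar_w   # intercept estimate

resid = area - (b0 + b1 * conc)
qme_w = np.sum(w * resid**2) / (len(area) - 2)    # ~5.2034

var_b1 = qme_w / sxx_w                                       # ~0.000110803
var_b0 = qme_w * np.sum(w * conc**2) / (np.sum(w) * sxx_w)   # ~0.12334
print(qme_w, np.sqrt(qme_w))
print(var_b1, np.sqrt(var_b1))
print(var_b0, np.sqrt(var_b0))
```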

 

To learn how to run this function in the Software Action, you can consult the user manual.

 

 

 
