The probability of a Type I error is #P("Reject " H_0 | H_0 " is true")#. For example, a Type I error occurs when the sample mean appears significantly different from 0 even though the true population mean really is 0, so that #H_0# is true but we reject it anyway. In terms of a confidence interval, #P("Type I error")# for #mu# is the chance that the true #mu# lies outside our confidence interval for it, and this equals the area under the sampling distribution curve outside the C.I. for #mu# (i.e. the area in the left and right tails).
The chance of a Type I error occurring is determined by the width of our C.I. for the parameter: the wider the interval, the less area is left in the tails. So if we want to decrease the chance of a Type I error, we widen the C.I., which means decreasing the area we leave in the tails, and that is done simply by decreasing the value we use for #alpha#.
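As a minimal sketch of this trade-off (assuming a normal sampling distribution with a known standard error, so the critical values are #z_(alpha//2)#; the sample mean and standard error below are made-up numbers for illustration):

```python
from scipy.stats import norm

# Hypothetical example values: sample mean 2.1, standard error 1.0.
x_bar, se = 2.1, 1.0

for alpha in (0.10, 0.05, 0.01):
    # Two-sided critical value: an area of alpha/2 in each tail.
    z_crit = norm.ppf(1 - alpha / 2)
    lower, upper = x_bar - z_crit * se, x_bar + z_crit * se
    print(f"alpha={alpha:.2f}  z_crit={z_crit:.3f}  "
          f"CI=({lower:.2f}, {upper:.2f})  width={upper - lower:.2f}")
```

Running this shows the interval stretching from roughly #+-1.645# standard errors at #alpha = 0.10# to #+-2.576# standard errors at #alpha = 0.01#: a smaller #alpha# gives a wider interval and less tail area.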
Our #alpha#-value is actually set to be equal to the total area in the tail(s). Simply put, that means #P("Type I error") = alpha#. Thus, lowering #alpha# means lowering the chance of a Type I error to #(100 * alpha)%#.
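To see that #P("Type I error") = alpha# in practice, here is a minimal Monte Carlo sketch in Python (assuming #H_0: mu = 0# is true, the data are standard normal, and we run a two-sided z-test at #alpha = 0.05#; the sample size and number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 100_000
z_crit = 1.959964  # two-sided critical value for alpha = 0.05

rejections = 0
for _ in range(trials):
    # Draw a sample for which H_0: mu = 0 is actually true.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    z = sample.mean() / (1.0 / np.sqrt(n))   # known sigma = 1, so a z-statistic
    if abs(z) > z_crit:                      # |z| lands in the rejection region
        rejections += 1                      # we reject a true H_0: a Type I error

print(f"Observed Type I error rate: {rejections / trials:.4f} (alpha = {alpha})")
```

The observed rejection rate comes out very close to 0.05, i.e. we wrongly reject a true #H_0# about #(100 * alpha)% = 5%# of the time.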