The power of a test is the probability of correctly rejecting the null hypothesis, computed under the assumption that a particular alternative value of the parameter in question is true. For example, suppose we are doing a right-tailed test on a mean, #H_0:mu=5# (null hypothesis) vs. #H_a:mu>5# (alternative hypothesis), and we know we will reject #H_0# when the sample mean #bar{x}>6#. We might then be interested in the power of this test under the assumption that #mu=7#. To compute it, we would also need to know the sample size #n# and, ideally, the population standard deviation #sigma#. In a courtroom, this is analogous to convicting a guilty person. When a test has high power, you can be confident that you will reject the null when it is false (convict the person when they are guilty).
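As a sketch of this calculation, here is the example worked out in Python. The values #sigma=2# and #n=25# are assumed purely for illustration (the original example does not specify them); the sampling distribution of #bar{x}# is taken to be normal.

```python
from statistics import NormalDist

# Hypothetical setup: sigma and n are assumed values, not from the example.
mu0 = 5          # null value of mu
cutoff = 6       # reject H_0 when the sample mean exceeds 6
mu_alt = 7       # assumed alternative value of mu
sigma = 2        # assumed population standard deviation
n = 25           # assumed sample size

se = sigma / n ** 0.5                           # standard error of xbar = 0.4
# Power = P(xbar > 6 | mu = 7), with xbar ~ Normal(mu_alt, se)
power = 1 - NormalDist(mu_alt, se).cdf(cutoff)
print(round(power, 4))                          # about 0.9938
```

With these assumed values the cutoff sits 2.5 standard errors below the alternative mean, so the test almost always rejects when #mu=7#.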
A Type 2 error is the flip side of the idea in the last paragraph: you incorrectly fail to reject the null when it is false. Its probability can again be computed under the assumption that a particular alternative value of the parameter in question is true. In fact, for that same parameter value, #P(\mbox{Type 2 error}) = 1 - Power#. In a courtroom, a Type 2 error is acquitting a guilty person.
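Continuing the same hypothetical example (again assuming #sigma=2# and #n=25#, values not given in the original), the Type 2 error probability is just the complement of the power:

```python
from statistics import NormalDist

# Assumed values carried over from the power example above.
mu_alt = 7
cutoff = 6
se = 2 / 25 ** 0.5      # sigma / sqrt(n) = 0.4
# P(Type 2 error) = P(fail to reject | mu = 7) = P(xbar <= 6 | mu = 7)
beta = NormalDist(mu_alt, se).cdf(cutoff)
power = 1 - beta
print(round(beta, 4))   # about 0.0062
```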
A Type 1 error is when you incorrectly reject the null when it is true. Its probability is typically set ahead of time as the "level of significance" of the test, and this probability is denoted by #alpha#. By setting #alpha# ahead of time, you are determining a "rejection region" for the sample statistic that will lead to rejecting #H_0#. If #H_0# is true, your probability of landing in the rejection region equals #alpha#. In a courtroom, a Type 1 error is convicting an innocent person.
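To illustrate how #alpha# determines the rejection region, here is a sketch for the running example, again with the assumed values #sigma=2# and #n=25# and a conventional #alpha=0.05#:

```python
from statistics import NormalDist

# Hypothetical values: sigma, n, and alpha are assumed for illustration.
mu0 = 5
sigma = 2
n = 25
alpha = 0.05
se = sigma / n ** 0.5
# Rejection region for a right-tailed test: reject H_0 when xbar exceeds
# the critical value mu0 + z_alpha * se.
critical = mu0 + NormalDist().inv_cdf(1 - alpha) * se
# Check: if H_0 is true, the probability of landing in the rejection
# region equals alpha.
type1_prob = 1 - NormalDist(mu0, se).cdf(critical)
print(round(critical, 3), round(type1_prob, 3))
```

With these numbers the critical value is about 5.658, so this #alpha=0.05# test would use a slightly lower cutoff than the #bar{x}>6# rule in the earlier example.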
The #p#-value of a test is computed after the test statistic has been computed. It is the probability of observing a test statistic value as extreme as, or more extreme than, what you actually observed from your sample, under the assumption that #H_0# is true. If the #p#-value is small, then you have observed something rare if the null is true. This then provides evidence against the truth of #H_0#. The smaller the #p#-value, the stronger the evidence against #H_0#. We say that the data provide "statistically significant evidence against #H_0# at level #alpha#" if the #p#-value is smaller than #alpha#.
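As a final sketch, a right-tailed #p#-value for the running example. The observed sample mean #bar{x}=5.8# is a made-up value, and #sigma=2#, #n=25# are the same assumed quantities as before:

```python
from statistics import NormalDist

# Hypothetical data: xbar, sigma, and n are assumed for illustration.
mu0 = 5
sigma = 2
n = 25
xbar = 5.8                      # observed sample mean (made up)
se = sigma / n ** 0.5
z = (xbar - mu0) / se           # observed test statistic, here z = 2.0
# Right-tailed p-value: P(Z >= z) assuming H_0 is true.
p_value = 1 - NormalDist().cdf(z)
print(round(p_value, 4))        # about 0.0228
```

Since this #p#-value is smaller than #alpha = 0.05#, these (hypothetical) data would provide statistically significant evidence against #H_0# at the 0.05 level.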