A confidence interval (CI) is a range of values, computed from sample data, that is expected to contain the true value of the parameter being estimated with a specified probability.
The confidence level $\gamma$ is determined by the significance level $\alpha$ → $\gamma = 1 - \alpha$
As the sample size grows, the estimate of the parameter value becomes more precise, resulting in a narrower confidence interval.
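As a concrete sketch of the two ideas above, the following computes a CI for a population mean from a small hypothetical sample (the data values and the use of the normal quantile $z \approx 1.96$ for $\gamma = 0.95$ are illustrative assumptions, not from the text; a $t$-quantile would be more exact for a sample this small):

```python
import math

# Hypothetical sample of n observations (values chosen for illustration)
sample = [4.9, 5.2, 5.1, 4.8, 5.0, 5.3, 4.7, 5.1]
n = len(sample)
xbar = sum(sample) / n

# Sample standard deviation (estimates the population sigma)
s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))

alpha = 0.05        # significance level
gamma = 1 - alpha   # confidence level, 0.95
z = 1.96            # normal quantile for gamma = 0.95 (approximation)

# Interval: xbar +/- z * (standard error of the mean)
half_width = z * s / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(ci)
```

Repeating this with a larger sample drawn from the same population shrinks `half_width` by the $1/\sqrt{n}$ factor discussed below.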
Recall that the standard deviation of a population $\sigma$ is a representation of the spread of the data points. Suppose an independent sample of $n$ observations $x_1,\dots,x_n$ is taken from a population with standard deviation $\sigma$. The mean value calculated from the sample, $\bar x$, will have an associated standard error on the mean, $\sigma_{\bar x}$, given by
$$ \sigma_{\bar x} = \frac{\sigma}{\sqrt{n}} $$
Practically, this tells us that when estimating a population mean, due to the factor $\frac{1}{\sqrt{n}}$, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many.
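The $\frac{1}{\sqrt{n}}$ scaling can be checked empirically. The sketch below (an illustration, assuming a normal population purely for convenience; the formula holds for any population with finite variance) draws many samples of size $n$, measures the spread of the resulting sample means, and compares it to $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 2.0, 100

# Theoretical standard error of the mean
se_theory = sigma / np.sqrt(n)  # 0.2

# Empirical check: the standard deviation of many sample means
# should match sigma / sqrt(n)
means = rng.normal(0.0, sigma, size=(10_000, n)).mean(axis=1)
se_empirical = means.std()

print(se_theory, se_empirical)

# Quadrupling n halves the standard error (the 1/sqrt(n) factor)
print(sigma / np.sqrt(4 * n))  # 0.1
```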
Derivation of Standard Error
Let $X = (x_1,\dots,x_n)$ be a sample of $n$ independent observations from a population with mean $\mu$ and standard deviation $\sigma$. Since the observations are independent, the variance of their sum is the sum of their variances:
$$ Var(x_1 + \dots + x_n) = Var(x_1) + \dots + Var(x_n) = n\sigma^2 $$
We want to know the variance of the sample mean $\bar x = \frac{1}{n}(x_1 + \dots + x_n)$