The z-score of a measure is given by the following formula:
[tex]z=\frac{x-\mu}{\sigma}[/tex]where x represents the measure, [tex]\mu[/tex] represents the mean of the distribution, and [tex]\sigma[/tex] represents the standard deviation.
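This formula is easy to check numerically. Below is a small Python sketch using hypothetical values ([tex]\mu=100[/tex], [tex]\sigma=15[/tex] are illustrative, not part of the problem):

```python
# Hypothetical distribution parameters, chosen only for illustration
mu, sigma = 100.0, 15.0

def z_score(x, mu, sigma):
    """Standardize a measure x against a distribution with mean mu and std sigma."""
    return (x - mu) / sigma

print(z_score(100.0, mu, sigma))  # the mean itself -> 0.0
print(z_score(115.0, mu, sigma))  # one standard deviation above the mean -> 1.0
```

A measure equal to the mean always standardizes to z = 0, and a measure one standard deviation above the mean standardizes to z = 1, which is exactly what the derivations below show algebraically.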
If the z-score equals zero, our measure will be
[tex]\begin{gathered} 0=\frac{x-\mu}{\sigma} \\ 0=x-\mu \\ x=\mu \end{gathered}[/tex]So a z-score of zero corresponds to the mean of the distribution.
For a z-score equal to 1, we have
[tex]\begin{gathered} 1=\frac{x-\mu}{\sigma} \\ \sigma=x-\mu \\ x=\mu+\sigma \end{gathered}[/tex]Then, the interval between z = 0 and z = 1 is the interval between the mean and one standard deviation above the mean.
[tex](\mu,\mu+\sigma)[/tex]Therefore, the graph that represents this interval is the first graph.
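As a side check (not required for the answer), the probability mass between z = 0 and z = 1 under a normal distribution can be computed with Python's standard library, using the identity [tex]\Phi(z)=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{z}{\sqrt{2}}\right)\right)[/tex]:

```python
import math

def standard_normal_cdf(z):
    # Phi(z) via the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability mass between z = 0 (the mean) and z = 1 (one standard deviation above)
area = standard_normal_cdf(1.0) - standard_normal_cdf(0.0)
print(round(area, 4))  # about 0.3413, i.e. roughly 34% of the distribution
```

This is the familiar "34%" slice from the empirical (68-95-99.7) rule, which is the region shaded in the correct graph.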