How is beam width measured?

The comparison is with and without an aperture. Here a large-area beam, covering 90 pixels in the digitized camera field, is measured at full intensity, near the top of the camera's count range.

The beam intensity is then reduced with neutral density filters, and the width is measured again. With an aperture, the D4σ method continues to measure the width with only a small error even as the intensity drops. If the field were larger or the beam smaller, the differences between these two measurements would be even more dramatic.

Figure 9. D4σ measurement using an aperture, with and without negative noise components, as the beam intensity is reduced.

Figure 9 illustrates the effect of retaining the negative numbers in the noise when measuring low-intensity signals. In this case, an aperture is used to minimize noise contributions for both measurements. With the negative noise components removed, the beam width measurement error increases rapidly and dramatically as the intensity is reduced. With the negative noise components retained, however, the error remains small even at the lowest intensities.

These measurements illustrate the effects of various parameters on beam width measurement. They show that it is possible to accurately measure a beam as small as just a few camera pixels in width, as long as the other parameters are kept at their optimum. They also show that it is possible to measure a beam of fairly low peak intensity, provided proper beam measurement algorithms are used. The first parameter that should be under the operator's control is the number of pixels in the digitized camera field.

Here the operator would want to restrict the digitized camera area and number of pixels to only what is needed to contain the beam. This minimizes the amount of noise contributed beyond the wings of the beam. Secondly, both the D4σ and knife-edge methods give relatively accurate measurements of the beam. D4σ is preferred if the beam is not Gaussian, because the knife-edge method uses a correction factor that is optimized for most beams, but not for all.

Third, an aperture placed around the beam improves the beam width measurement accuracy by nearly a factor of 10 in most cases. Finally, retaining negative numbers in the noise floor can be extremely important. For high-intensity beams that are large enough to fill the camera, the negative numbers are not as significant. However, if the peak beam intensity falls well below saturation, or if a small beam of only a few pixels in a large field is measured, then retaining the negative numbers gives more than a factor of 10 improvement in accuracy.
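To make the D4σ (second-moment) width and the effect of an aperture concrete, here is a minimal Python sketch that computes D4σ widths on a simulated frame, with and without a software aperture. The frame size, spot size, noise level, and aperture radius are made-up illustration values, not the conditions of the measurements described above.

```python
import numpy as np

def d4sigma_widths(frame, aperture=None):
    """Second-moment (D4-sigma) beam widths, in pixels, of a 2D intensity frame.
    Negative noise values in `frame` should be retained (not clipped) so that
    the noise averages out of the moments. `aperture` is an optional boolean
    mask selecting the pixels included in the calculation."""
    data = np.asarray(frame, dtype=float)
    if aperture is not None:
        data = np.where(aperture, data, 0.0)
    y, x = np.indices(data.shape)
    total = data.sum()
    cx = (x * data).sum() / total                 # centroid
    cy = (y * data).sum() / total
    sx2 = ((x - cx) ** 2 * data).sum() / total    # second moments
    sy2 = ((y - cy) ** 2 * data).sum() / total
    return 4.0 * np.sqrt(sx2), 4.0 * np.sqrt(sy2)

# Hypothetical example: a Gaussian spot (sigma = 6 px, true D4sigma = 24 px)
# plus zero-mean signed noise, measured with and without a circular aperture.
rng = np.random.default_rng(1)
y, x = np.indices((120, 120))
spot = 200.0 * np.exp(-((x - 60) ** 2 + (y - 55) ** 2) / (2 * 6.0 ** 2))
frame = spot + rng.normal(0.0, 2.0, spot.shape)
mask = (x - 60) ** 2 + (y - 55) ** 2 < 30 ** 2
print("no aperture:", d4sigma_widths(frame))
print("aperture   :", d4sigma_widths(frame, aperture=mask))
```

Restricting the moment sums to an aperture around the beam excludes the noise contributed by the many far-away pixels, which is exactly why the aperture improves the width accuracy in the measurements above.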

Unique Signal Processing. Retaining the negative numbers in the camera baseline field enables signal processing that would not otherwise be possible. One example of this processing is frame summing.

If the negative noise components are eliminated, as is the case with most digitizers, frame summing causes all of the positive noise components to add continually and ultimately produces a net positive baseline offset. This makes frame summing an essentially worthless exercise.

However, with negative numbers retained, the negative noise components subtract from the positive noise components on a frame-by-frame basis, and keep the mean noise distribution near zero. The size of the noise, however, adds roughly as the square root of the number of frames summed.

The signal, however, grows roughly as the number of frames, so the signal-to-noise ratio improves roughly as the square root of the number of frames summed. Summing is used in the following example to show the dramatic effect that can be obtained by retaining the negative numbers.

Figure 10. HeNe near-Gaussian beam measured and displayed at nearly full intensity.
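A minimal Python sketch of this behavior, assuming a 1-count beam buried in 3-count noise as in the experiment described next; the frame count, beam shape, and wing region used for the baseline estimate are arbitrary illustration choices, not the actual measurement data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a weak Gaussian "beam" with a peak of ~1 count,
# buried in noise with a standard deviation of ~3 counts, summed over frames.
x = np.arange(128)
beam = 1.0 * np.exp(-((x - 64) ** 2) / (2 * 8.0 ** 2))   # peak = 1 count

n_frames = 1024
signed_sum = np.zeros_like(beam)    # negative noise values retained
clipped_sum = np.zeros_like(beam)   # negative noise values set to zero

for _ in range(n_frames):
    frame = beam + rng.normal(0.0, 3.0, size=beam.shape)
    signed_sum += frame                        # noise averages toward zero
    clipped_sum += np.clip(frame, 0.0, None)   # positive-only noise accumulates

# With signed noise the summed signal grows as N while the noise grows only as
# sqrt(N), so the 1-count beam emerges; with clipping, a positive baseline
# offset builds up and buries it.
print("signed  baseline (mean of wings):", signed_sum[:20].mean() / n_frames)
print("clipped baseline (mean of wings):", clipped_sum[:20].mean() / n_frames)
print("signed  peak estimate           :", signed_sum.max() / n_frames)
```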

The utility of frame-summing signal processing is shown with a low-intensity beam. A laser beam signal was attenuated until it was buried in the noise, so that its peak was approximately 1 digital count.

The noise was approximately 3 digital counts. For reference, Figure 10 shows a 3D picture of the beam at its full intensity. With attenuation, the beam was reduced to the point where it was not visible in the display. To show the accuracy of the baseline calibration, the beam was first blocked completely, and the baseline was summed over a large number of frames.

Figure 11 shows a 3D picture of this noise field. Notice the large amount of noise. The negative noise components are shown in gray, going below the colored positive noise components.

Figure 12. Histogram of the noise of Figure 11, showing the number of pixels at each intensity level.

Figure 12 shows the histogram of the summed noise baseline from Figure 11. In the histogram the noise is centered very close to zero, with a roughly Gaussian distribution about zero.

A close inspection of Figure 12 shows that the noise is centered slightly below zero. This is because the camera, which had only been turned on for about 15 minutes, was still warming up and the baseline was drifting. Thus, even though the measurement was made immediately after an Ultracal, the baseline drifted just slightly during that period, giving this small offset. Spiricon's mantra is "Ultracal early, and Ultracal often." Another Ultracal was performed; immediately afterwards the 1-count beam was unblocked, and a sum of frames of that beam was made.

This summed beam profile is shown in the following figure.

With an infinite number of illuminated pixels (requiring infinitely small pixels), the true profile of the beam would be represented exactly. The discretization causes errors in the beam width measurement; therefore, the beam must illuminate a minimum number of pixels for an accurate beam width measurement.

To determine the minimum number of pixels that must be illuminated, a theoretical model was created and tested against experimental data. First, the profile of a perfect Gaussian beam was generated to represent the profile of a beam incident on a sensor (see Figure 1). The beam width can be determined analytically from the Gaussian formula, $f(x) = a\,e^{-(x-b)^2/(2c^2)}$, where a, b, and c are constants. To approximate how the sensor reads the beam intensity, a number m of equally sized bins was created along the x-axis.

A bin represents one pixel; therefore, m represents the number of pixels illuminated. The beam intensity was then integrated across the width of each bin and the values normalized by setting the maximum bin value equal to the maximum of the Gaussian beam. The bins were then plotted alongside the Gaussian. We also randomly offset the Gaussian beam such that the center of the Gaussian did not always fall in the exact center of a pixel.

If the beam is perfectly aligned on the pixels, a symmetric quantization will be seen (see Figure 1a). However, if the beam is shifted slightly, an asymmetric quantization will be seen (see Figure 1b). A variety of different alignments for each value of m were simulated and the percentage errors recorded (see Figures 1c-1f).
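The following Python sketch reproduces the idea of this model under stated assumptions: a unit-width Gaussian is integrated into m equal bins over a fixed span, the second-moment (D4σ) width is computed from the binned values, and the worst-case error over a range of sub-pixel offsets is reported. The ±4c span, the bin counts, and the offsets are illustrative choices, not the values used in the original study.

```python
import numpy as np

def gaussian(x, a=1.0, b=0.0, c=1.0):
    """Gaussian profile f(x) = a * exp(-(x - b)^2 / (2 c^2))."""
    return a * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))

def binned_width_error(m, offset=0.0, c=1.0, span=4.0):
    """Percent error of the second-moment (D4-sigma) width measured from a
    Gaussian beam integrated into m equal pixels spanning +/- span*c,
    with the pixel grid shifted by `offset` relative to the beam center."""
    edges = np.linspace(-span * c, span * c, m + 1) + offset
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Integrate the beam over each pixel using a fine sub-grid.
    fine = np.linspace(edges[0], edges[-1], 200 * m)
    weights = gaussian(fine, c=c)
    idx = np.digitize(fine, edges[1:-1])
    counts = np.bincount(idx, weights=weights, minlength=m)
    # Second-moment width from the binned samples placed at pixel centers.
    total = counts.sum()
    centroid = (counts * centers).sum() / total
    sigma = np.sqrt((counts * (centers - centroid) ** 2).sum() / total)
    return 100.0 * (4.0 * sigma - 4.0 * c) / (4.0 * c)

for m in (4, 8, 16, 64):
    pixel = 2.0 * 4.0 / m                      # pixel width for c = 1, span = 4
    offsets = np.linspace(0.0, pixel, 11)      # sub-pixel alignments of the grid
    errors = [binned_width_error(m, offset=o) for o in offsets]
    print(f"m = {m:3d} illuminated pixels: worst-case |error| = "
          f"{max(abs(e) for e in errors):.2f} %")
```

With only a few pixels across the beam the binning systematically inflates the measured width and the result depends on the sub-pixel alignment; as m grows, both effects shrink rapidly, which is the behavior the model and the experimental data are used to quantify.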

As an example, if the average power is above 1 W, you will have to sample your beam before using attenuation filters. We offer various models of beam samplers to accommodate your attenuation needs up to high average powers.

When someone asks me how to measure the spot size of their laser beam, I often answer with a few questions, because this type of measurement is not as simple as it can appear at first glance, especially if you want to do it with precision and rigor.

Do you want to measure the focal spot or the beam diameter? Which beam diameter definition do you want to use? Do you have the appropriate instruments?

Do you want to measure the focal spot size or the beam diameter? If we consider the ideal case of a Gaussian beam, the beam width (or radius) w along the propagation axis z is defined by the following equation:

$$w(z) = w_0 \sqrt{1 + \left(\frac{z}{z_R}\right)^2}$$

where $w_0$ is the beam waist (the smallest radius of the Gaussian beam) and $z_R$ is the Rayleigh length:

$$z_R = \frac{\pi w_0^2}{\lambda}$$

The beam diameter is simply twice the beam radius, and can be measured anywhere along the propagation axis.

When you focus a Gaussian beam with a lens of focal length f, the beam waist (or laser spot size) becomes approximately

$$w_0' \approx \frac{\lambda f}{\pi w_0}$$

where $w_0$ is now the radius of the collimated input beam at the lens. The focal spot size can therefore be very small, and when it is, the beam size varies very rapidly along the propagation axis. This stands in stark contrast to the CCD or the moving-aperture method, where deviation from the transverse plane creates an apparent ellipticity.
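For a quick numeric check of these relations, here is a short Python sketch. The wavelength, waist, and focal length are arbitrary example values, and the focused-waist formula is the standard approximation for a collimated input beam whose waist sits at the lens, not a value taken from the text.

```python
import math

wavelength = 633e-9   # HeNe wavelength, m (example value)
w0 = 0.5e-3           # input beam waist radius, m (example value)
f = 100e-3            # lens focal length, m (example value)

# Rayleigh length and beam radius along the propagation axis
z_R = math.pi * w0**2 / wavelength
def w(z):
    return w0 * math.sqrt(1.0 + (z / z_R) ** 2)

# Approximate focused waist for a collimated input beam of radius w0 at the lens
w0_focus = wavelength * f / (math.pi * w0)

print(f"Rayleigh length z_R        = {z_R*1e3:.1f} mm")
print(f"w(z = z_R)                 = {w(z_R)*1e6:.1f} um  (sqrt(2) * w0)")
print(f"focused waist w0'          = {w0_focus*1e6:.1f} um")
print(f"focused spot diameter 2w0' = {2*w0_focus*1e6:.1f} um")
```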

The discussion above assumes that the shape of the beams under investigation is well known. The difference between them becomes appreciable only when the wire covers most of the central part of the beam and the wings of the intensity distribution become more important. In the theoretical analysis we assumed that all transmitted light is detected. In some circumstances, however, diffraction can lead to some of the transmitted light being scattered into angles large enough that it misses the collection lens.

The lower value of T_min results in an underestimation of the beam waist [see Eq. above]. A sketch of the optical setup is shown in the corresponding figure. Figure 4 shows a typical oscilloscope trace of a wire of 1 mm diameter being manually moved through the beam. As described in the previous section, we can then calculate the Gaussian beam diameter from the measured minimum transmission.
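As a sketch of how the beam radius follows from the minimum transmission, the snippet below uses the standard thin-wire occlusion model for a Gaussian beam, in which the blocked power fraction for a wire of diameter D centered on the beam is erf(D / (√2·w)). This relation stands in for the equation referenced in the text, it ignores diffraction losses, and the numbers are made up.

```python
import numpy as np
from scipy.special import erfinv

def gaussian_radius_from_wire(T_min, wire_diameter):
    """Infer the 1/e^2 Gaussian beam radius w from the minimum relative
    transmission T_min observed as an opaque wire of the given diameter is
    scanned through the beam (thin-wire occlusion model, no diffraction)."""
    return wire_diameter / (np.sqrt(2.0) * erfinv(1.0 - T_min))

# Example with made-up numbers: a 1.0 mm wire dips the transmitted power
# to 30% of its unobstructed value at the center of the scan.
w = gaussian_radius_from_wire(T_min=0.30, wire_diameter=1.0e-3)
print(f"beam radius w  ~ {w*1e3:.2f} mm")
print(f"beam diameter  ~ {2*w*1e3:.2f} mm")
```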

The diameter of the wire is expected to have a considerable impact on the ease and precision of the measurement. In order to assess the effect of the wire diameter D on the accuracy of the measurement, we measured one and the same beam using wires of various diameters. The results of comparing the reference measurement to the wire-based method are shown in the upper part of Table 1, and the results of the wire-diameter study are summarized in its lower part. The numbers in brackets are the standard deviations of the experimental points in units of the last digit, except for the fourth column, where the wire diameter was measured with digital calipers and the brackets indicate the stated accuracy of the device.

We have presented a simple and precise method to measure the width of Gaussian beams, Airy spots, and Bessel beams to very high accuracy, as verified experimentally for Gaussian beams. The method works for a very large range of beam widths, from a few micrometers upward, with essentially no upper limit on the beam size. The lower limit on the beam size is set in practice by the availability of a suitably thin wire.

Our measurements are in good agreement with the comparative measurements performed with a commercial slit-based beam profiler and the knife-edge technique. Finally, the proposed technique is fully scalable and can be used in confined spaces where a beam profiler cannot be placed or for cases in which the beam width is larger than the beam-profiler aperture.

The simplicity of the proposed technique, which requires only instruments readily available in any optics laboratory, combined with its accuracy and repeatability, makes it a very interesting, low-cost alternative to standard beam-profiling techniques.

We thank the four anonymous referees, whose comments have triggered us to widen the scope of the paper considerably. The authors would like to acknowledge networking support by AtomQT.


