<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Hybridizer - In Fine - Le Blog</title>
	<atom:link href="https://blog.infine.com/category/hybridizer/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.infine.com</link>
	<description>Le blog des technos de demain !</description>
	<lastBuildDate>Mon, 11 Oct 2021 13:11:47 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.7</generator>

<image>
	<url>https://blog.infine.com/wp-content/uploads/2021/03/cropped-vignette-32x32.png</url>
	<title>Hybridizer - In Fine - Le Blog</title>
	<link>https://blog.infine.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>From C# to SIMD : Numerics.Vector and Hybridizer</title>
		<link>https://blog.infine.com/from-c-to-simd-numerics-vector-and-hybridizer-3339?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=from-c-to-simd-numerics-vector-and-hybridizer</link>
					<comments>https://blog.infine.com/from-c-to-simd-numerics-vector-and-hybridizer-3339#respond</comments>
		
		<dc:creator><![CDATA[Florent]]></dc:creator>
		<pubDate>Wed, 06 Oct 2021 08:49:00 +0000</pubDate>
				<category><![CDATA[C#]]></category>
		<category><![CDATA[Hybridizer]]></category>
		<guid isPermaLink="false">https://blog.infine.com/?p=3339</guid>

					<description><![CDATA[<p><span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Temps de lecture : </span> <span class="rt-time">5</span> <span class="rt-label rt-postfix">min.</span></span> System.Numerics.Vector&#160;is a library provided by .Net (as a nuget package), which tries to leverage SIMD instructions on target hardware. It exposes a few value types, such as&#160;Vector&#60;T&#62;, which are recognized by&#160;RyuJIT&#160;as intrinsics. Supported intrinsics are listed in the&#160;core-clr github repository. This allows C# SIMD acceleration, as long as code is modified to use these intrinsic types, instead &#8230;</p>
<p>The post <a href="https://blog.infine.com/from-c-to-simd-numerics-vector-and-hybridizer-3339">From C# to SIMD : Numerics.Vector and Hybridizer</a> first appeared on <a href="https://blog.infine.com">In Fine - Le Blog</a>.</p>]]></description>
										<content:encoded><![CDATA[<span class="rt-reading-time" style="display: block;"><span class="rt-label rt-prefix">Temps de lecture : </span> <span class="rt-time">5</span> <span class="rt-label rt-postfix">min.</span></span>
<p><a href="https://msdn.microsoft.com/en-us/library/dn858218(v=vs.111).aspx" target="_blank" rel="noopener">System.Numerics.Vector</a>&nbsp;is a library provided by .Net (as a nuget package), which tries to leverage SIMD instructions on target hardware. It exposes a few value types, such as&nbsp;<code>Vector&lt;T&gt;</code>, which are recognized by&nbsp;<a href="https://blogs.msdn.microsoft.com/dotnet/2013/09/30/ryujit-the-next-generation-jit-compiler-for-net/" target="_blank" rel="noopener">RyuJIT</a>&nbsp;as intrinsics.<br>Supported intrinsics are listed in the&nbsp;<a href="https://raw.githubusercontent.com/dotnet/coreclr/master/src/jit/simdintrinsiclist.h" target="_blank" rel="noopener">core-clr github repository</a>.<br>This allows C# SIMD acceleration, as long as code is modified to use these intrinsic types instead of scalar floating-point elements.</p>
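<p>As a minimal illustration (our own sketch, not part of the benchmark below), here is how <code>Vector&lt;T&gt;</code> is typically used: process an array in SIMD-width chunks, with a scalar loop for the tail.</p>

```csharp
using System.Numerics;

static class VectorAddDemo
{
    // Adds a and b element-wise into c, Vector<double>.Count lanes at a time.
    public static void Add(double[] a, double[] b, double[] c)
    {
        int w = Vector<double>.Count; // 4 doubles per vector on AVX2 hardware
        int i = 0;
        for (; i <= a.Length - w; i += w)
        {
            var va = new Vector<double>(a, i);
            var vb = new Vector<double>(b, i);
            (va + vb).CopyTo(c, i); // one vector add per w elements
        }
        for (; i < a.Length; ++i)   // scalar tail
            c[i] = a[i] + b[i];
    }
}
```

RyuJIT recognizes the `Vector<double>` operations as intrinsics and emits packed AVX instructions for the main loop.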



<p>On the other hand, Hybridizer aims to provide those benefits without being intrusive in the code (only metadata is required).</p>



<p>We naturally wanted to test if System.Numerics.Vector delivers good performance, compared to Hybridizer.</p>



<figure class="wp-block-table"><table><tbody><tr><td><strong>Summary</strong><br>We measured that Numerics.Vector provides a good speed-up over scalar C# code as long as no transcendental function is involved (such as Math.Exp), but it still lags behind Hybridizer. Because some operators and mathematical functions are missing, Numerics can also generate really slow code (when the AVX pipeline is broken). In addition, modifying the code is a heavy process, and can&#8217;t easily be rolled back.</td></tr></tbody></figure>



<p>We wrote and ran two benchmarks, and for each of them we have four versions:</p>



<ul><li>Simple C# scalar code</li><li>Numerics.Vector</li><li>Simple C# scalar code, hybridized</li><li>Numerics.Vector, hybridized</li></ul>



<p>Processor is a&nbsp;<a href="http://ark.intel.com/products/75124/Intel-Core-i7-4770S-Processor-8M-Cache-up-to-3_90-GHz" target="_blank" rel="noopener">core i7-4770S @ 3.1GHz</a>&nbsp;(max measured turbo in AVX mode being 3.5GHz). Peak flops is 224 GFlop/s (4 cores × 2 FMA units × 4 double lanes × 2 flops at 3.5GHz), or 112 GCFlop/s if we count&nbsp;<a href="https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation" target="_blank" rel="noopener">FMA&nbsp;</a>as one operation (since our processor supports it).</p>



<h2 class="wp-block-heading">Compute bound benchmark</h2>



<p>This is a compute-intensive benchmark. For each element of a large double-precision array (8 million elements: 67 MB), we iterate the computation of an exponential&#8217;s Taylor expansion (expm1) twelve times. This is largely enough to enter the compute-bound world, hiding memory-operation latency behind a large number of floating-point operations.<br>Scalar code is simply:</p>





<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> [MethodImpl(MethodImplOptions.AggressiveInlining)]
 public static double expm1(double x)
 {
   return ((((((((((((((15.0 + x)
     * x + 210.0)
     * x + 2730.0)
     * x + 32760.0)
     * x + 360360.0)
     * x + 3603600.0)
     * x + 32432400.0)
     * x + 259459200.0)
     * x + 1816214400.0)
     * x + 10897286400.0)
     * x + 54486432000.0)
     * x + 217945728000.0)
     * x + 653837184000.0)
     * x + 1307674368000.0)
     * x * 7.6471637318198164759011319857881e-13;
 }
 [MethodImpl(MethodImplOptions.AggressiveInlining)]
 public static double twelve(double x)
 {
   return expm1(expm1(expm1(expm1(expm1(expm1(expm1(expm1(expm1(expm1(expm1(expm1(x))))))))))));
 } </code></pre>



<p>on which we added the&nbsp;<a href="https://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.methodimploptions(v=vs.110).aspx" target="_blank" rel="noopener">AggressiveInlining&nbsp;</a>attribute to help RyuJIT merge operations at JIT time.</p>



<p>The Numerics.Vector version of the code is almost identical:</p>



<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> [MethodImpl(MethodImplOptions.AggressiveInlining)]
 public static Vector&lt;double> expm1(Vector&lt;double> x)
 {
   return ((((((((((((((new Vector&lt;double>(15.0) + x)
     * x + new Vector&lt;double>(210.0))
     * x + new Vector&lt;double>(2730.0))
     * x + new Vector&lt;double>(32760.0))
     * x + new Vector&lt;double>(360360.0))
     * x + new Vector&lt;double>(3603600.0))
     * x + new Vector&lt;double>(32432400.0))
     * x + new Vector&lt;double>(259459200.0))
     * x + new Vector&lt;double>(1816214400.0))
     * x + new Vector&lt;double>(10897286400.0))
     * x + new Vector&lt;double>(54486432000.0))
     * x + new Vector&lt;double>(217945728000.0))
     * x + new Vector&lt;double>(653837184000.0))
     * x + new Vector&lt;double>(1307674368000.0))
     * x * new Vector&lt;double>(7.6471637318198164759011319857881e-13);
} </code></pre>



<p>The four versions of this code give the following performance results:</p>



<figure class="wp-block-table"><table><tbody><tr><td>Flavor</td><td>Scalar C#</td><td>Vector C#</td><td>Vector Hyb</td><td>Scalar Hyb</td></tr><tr><td>GCFlop/s</td><td>4.31</td><td>19.95</td><td>41.29</td><td>59.65</td></tr></tbody></table></figure>



<div class="wp-block-image"><figure class="aligncenter"><a href="http://www.altimesh.com/wp-content/uploads/2017/06/expm1-numerics-vector-speedup.png" class="fancyboxgroup" rel="gallery-3339"><img decoding="async" src="http://hybridizer.io/wp-content/uploads/2017/06/expm1-numerics-vector-speedup.png" alt="" class="wp-image-237"/></a></figure></div>



<p>As stated, Numerics.Vector delivers close to a 4x speedup over scalar code. However, performance is far from what we reach with the Hybridizer. A look at the generated assembly makes it quite clear why:</p>



<pre class="wp-block-code"><code lang="c" class="language-c"> vbroadcastsd ymm0,mmword ptr [7FF7C2255B48h]
 vbroadcastsd ymm1,mmword ptr [7FF7C2255B50h]
 vbroadcastsd ymm2,mmword ptr [7FF7C2255B58h]
 vbroadcastsd ymm3,mmword ptr [7FF7C2255B60h]
 vbroadcastsd ymm4,mmword ptr [7FF7C2255B68h]
 vbroadcastsd ymm5,mmword ptr [7FF7C2255B70h]
 vbroadcastsd ymm7,mmword ptr [7FF7C2255B78h]
 vbroadcastsd ymm8,mmword ptr [7FF7C2255B80h]
 vaddpd ymm0,ymm0,ymm6 
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm1
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm2
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm3
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm4
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm5
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm7
 vmulpd ymm0,ymm0,ymm6
 vaddpd ymm0,ymm0,ymm8
 vmulpd ymm0,ymm0,ymm6
 ; repeated </code></pre>



<p>Fused multiply-adds are not reconstructed, and constant operands are reloaded from the constant pool at each expm1 invocation. This leads to high register pressure (each constant occupies a register), where memory operands could save some.</p>
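<p>In C# terms, the contraction the Hybridizer backend performs corresponds to writing each Horner step as an explicit fused multiply-add. A sketch (ours, not from the benchmark; <code>Math.FusedMultiplyAdd</code> requires .NET Core 3.0 or later):</p>

```csharp
using System;

static class ExpFma
{
    // Same Taylor expansion as expm1 above, with each "acc * x + c"
    // step written explicitly as a fused multiply-add.
    static readonly double[] Coeffs =
    {
        210.0, 2730.0, 32760.0, 360360.0, 3603600.0, 32432400.0,
        259459200.0, 1816214400.0, 10897286400.0, 54486432000.0,
        217945728000.0, 653837184000.0, 1307674368000.0
    };

    public static double expm1(double x)
    {
        double acc = 15.0 + x;
        foreach (double c in Coeffs)
            acc = Math.FusedMultiplyAdd(acc, x, c); // acc * x + c, single rounding
        return acc * x * 7.6471637318198164759011319857881e-13; // times 1/15!
    }
}
```

Each loop iteration maps directly onto one `vfmadd213pd` of the Hybridizer-generated listing below.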



<p>Here is what the Hybridizer generates from scalar code:</p>



<pre class="wp-block-code"><code lang="c" class="language-c"> vaddpd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vfmadd213pd ymm1,ymm0,ymmword ptr []
 vmulpd ymm0,ymm0,ymm1
 vmulpd ymm0,ymm0,ymmword ptr []
 vmovapd ymmword ptr [rsp+0A20h],ymm0
 ; repeated </code></pre>



<p>This reconstructs fused multiply-adds, and leverages memory operands to save registers.</p>



<p>Why are we not at peak performance (112 GCFlop/s)? Because Haswell has two FMA pipelines and an FMA latency of 5 cycles (see the&nbsp;<a href="https://software.intel.com/sites/landingpage/IntrinsicsGuide/#techs=FMA&amp;expand=2595,2381" target="_blank" rel="noopener">intel intrinsics guide</a>). To reach peak performance, we would need to issue two independent FMA instructions every cycle. This would require reordering instructions at compile time, since the&nbsp;<a href="http://www.anandtech.com/show/6355/intels-haswell-architecture/8" target="_blank" rel="noopener">reorder buffer</a>&nbsp;is not deep enough to find independent instructions that far apart in the instruction stream. LLVM, our backend compiler, is not capable of such reordering. To get better performance, we would unfortunately have to write assembly by hand (which is not exactly what a C# programmer expects to do in the morning).</p>
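<p>The classic source-level workaround (a sketch under our assumptions; we did not benchmark it here) is to process two elements per iteration, so the compiler sees two independent FMA chains it can interleave. For a self-contained example, a delegate stands in for <code>twelve</code>; in the real benchmark the call would be inlined:</p>

```csharp
using System;

static class Unroll
{
    // Two independent dependency chains per iteration: while one FMA chain
    // waits out its 5-cycle latency, the other can issue on the second port.
    public static void Apply(Func<double, double> f, double[] src, double[] dst)
    {
        int i = 0;
        for (; i + 1 < src.Length; i += 2)
        {
            double r0 = f(src[i]);     // chain 0
            double r1 = f(src[i + 1]); // chain 1, independent of chain 0
            dst[i] = r0;
            dst[i + 1] = r1;
        }
        if (i < src.Length)            // odd-length tail
            dst[i] = f(src[i]);
    }
}
```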



<h2 class="wp-block-heading">Invoke transcendentals</h2>



<p>In this second benchmark, we need to compute the exponential of all the components of a vector. To do that, we invoke&nbsp;<a href="https://msdn.microsoft.com/en-us/library/system.math.exp(v=vs.110).aspx" target="_blank" rel="noopener">Math.Exp</a>.<br>Scalar code is:</p>





<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> [EntryPoint]
 public static void Apply_scal(double[] d, double[] a, double[] b, double[] c, int start, int stop)
 {
   int sstart = start + threadIdx.x + blockDim.x * blockIdx.x;
   int step = blockDim.x * gridDim.x;
   for (int i = sstart; i &lt; stop; i += step)
   {
     d[i] = a[i] * Math.Exp(b[i]) * Math.Exp(c[i]);
   }
 } </code></pre>



<p>This function is later called in a&nbsp;<code>Parallel.For</code>&nbsp;construct.</p>
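<p>The driver is not shown in the post; here is a minimal sketch of what such a <code>Parallel.For</code> dispatch might look like on the CPU (the partitioning scheme and names are our assumptions):</p>

```csharp
using System;
using System.Threading.Tasks;

static class Driver
{
    // Splits [0, n) into one contiguous chunk per worker and applies
    // d[i] = a[i] * exp(b[i]) * exp(c[i]) on each chunk.
    public static void Run(double[] d, double[] a, double[] b, double[] c)
    {
        int n = d.Length;
        int workers = Environment.ProcessorCount;
        int chunk = (n + workers - 1) / workers; // ceiling division
        Parallel.For(0, workers, w =>
        {
            int start = w * chunk;
            int stop = Math.Min(start + chunk, n);
            for (int i = start; i < stop; ++i)
                d[i] = a[i] * Math.Exp(b[i]) * Math.Exp(c[i]);
        });
    }
}
```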



<p>However, Numerics.Vector does not provide a vectorized exponential function. Therefore, we have to write our own:</p>



<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> [IntrinsicFunction("hybridizer::exp")]
 [MethodImpl(MethodImplOptions.AggressiveInlining)]
 public static Vector&lt;double> Exp(Vector&lt;double> x)
 {
   double[] tmp = new double[Vector&lt;double>.Count];
   for(int k = 0; k &lt; Vector&lt;double>.Count; ++k)
   {
     tmp[k] = Math.Exp(x[k]);
   }
   return new Vector&lt;double>(tmp);
 } </code></pre>



<p>At a glance, we can see the problems: each call allocates a temporary array, breaks the AVX context (<a href="https://software.intel.com/en-us/articles/avoiding-avx-sse-transition-penalties">which costs tens of cycles</a>), and triggers four scalar function calls instead of a single vector one.</p>



<p>With no surprise, this code performs really badly:</p>



<figure class="wp-block-table"><table><tbody><tr><td>Flavor</td><td>Scalar C#</td><td>Vector C#</td><td>Vector Hyb</td><td>Scalar Hyb</td></tr><tr><td>GB/s</td><td>13.42</td><td>1.80</td><td>14.91</td><td>14.13</td></tr></tbody></table></figure>



<div class="wp-block-image"><figure class="aligncenter"><a href="http://www.altimesh.com/wp-content/uploads/2017/06/bandwidth-numerics-vector-speedup.png" class="fancyboxgroup" rel="gallery-3339"><img decoding="async" src="http://hybridizer.io/wp-content/uploads/2017/06/bandwidth-numerics-vector-speedup.png" alt="" class="wp-image-242"/></a></figure></div>



<p>Looking at the generated assembly confirms what we suspected (AVX/SSE transitions and ymm register splitting):</p>



<pre class="wp-block-code"><code lang="c" class="language-c"> vextractf128 xmm9,ymm6,1
 vextractf128 xmm10,ymm7,1
 vextractf128 xmm11,ymm8,1
 call 00007FF8127C6B80 // exp
 vinsertf128 ymm8,ymm8,xmm11,1
 vinsertf128 ymm7,ymm7,xmm10,1
 vinsertf128 ymm6,ymm6,xmm9,1 </code></pre>
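<p>The scalar calls and AVX transitions are inherent to the missing intrinsic, but the per-call heap allocation in our <code>Exp</code> wrapper, at least, can be avoided (a sketch; the <code>Vector&lt;double&gt;</code> span constructor requires .NET Core 3.0 or later):</p>

```csharp
using System;
using System.Numerics;

static class VecMath
{
    // Same scalar fallback as above, but the temporary buffer lives
    // on the stack, so no garbage is produced per call.
    public static Vector<double> Exp(Vector<double> x)
    {
        Span<double> tmp = stackalloc double[Vector<double>.Count];
        for (int k = 0; k < Vector<double>.Count; ++k)
            tmp[k] = Math.Exp(x[k]); // still one scalar call per lane
        return new Vector<double>(tmp);
    }
}
```

This removes GC pressure but does nothing for the AVX/SSE transition penalties shown above.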



<h2 class="wp-block-heading">Branching</h2>



<p>Branches are expressed using&nbsp;<code>if</code>&nbsp;or ternary operators in scalar code. However, those are not available with Numerics.Vector, since the code is manually vectorized.<br>Branches must instead be expressed using&nbsp;<code>ConditionalSelect</code>, which leads to code such as:</p>



<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> public static Vector&lt;double> func(Vector&lt;double> x)
 {
   Vector&lt;double> one = new Vector&lt;double>(1.0); // 'one' was left undefined in the original snippet
   Vector&lt;long> mask = Vector.GreaterThan(x, one);
   Vector&lt;double> result = Vector.ConditionalSelect(mask, x, one);
   return result;
 } </code></pre>



<p>As we can see, expressing conditions with Numerics.Vector is unintuitive, intrusive, and bug-prone. It&#8217;s actually the same as writing AVX compiler intrinsics in C++. On the other hand, Hybridizer supports conditions, which allows you to write the above code this way:</p>



<pre class="wp-block-code"><code lang="csharp" class="language-csharp"> [Kernel]
 public static double func(double x)
 {
   if (x > 1.0)
     return x;
   return 1.0;
 } </code></pre>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Numerics.Vector easily delivers reasonable performance on simple code (no branches, no function calls), with the speed-up we expect from the vector unit width. However, expressing conditions is time-consuming and error-prone, and performance collapses as soon as a JIT intrinsic is missing (such as the exponential).</p>






<p>The post <a href="https://blog.infine.com/from-c-to-simd-numerics-vector-and-hybridizer-3339">From C# to SIMD : Numerics.Vector and Hybridizer</a> first appeared on <a href="https://blog.infine.com">In Fine - Le Blog</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.infine.com/from-c-to-simd-numerics-vector-and-hybridizer-3339/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
