https://daveaglick.com/
Copyright © 2023
2023-03-13T16:49:24Z
The personal blog of Dave Glick
https://daveaglick.com/posts/default-interface-members-what-are-they-good-for
Default Interface Members, What Are They Good For?
2019-09-12T00:00:00Z
<p><a href="https://daveaglick.com/posts/default-interface-members-and-inheritance">In my last post</a> I promised to look at some of the use cases where I think it's worthwhile to consider using default interface members. They're certainly not going to replace many existing conventions, but I have found some situations where targeted use can lead to cleaner, more maintainable code (at least in my own opinion).</p>
<h1 id="update-interfaces-without-breaking">Update Interfaces Without Breaking</h1>
<p><a href="https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/default-interface-members-versions">The docs</a> say:</p>
<blockquote>
<p>The most common scenario is to safely add members to an interface already released and used by innumerable clients.</p>
</blockquote>
<p>The problem this solves is that if you add a new member to an interface, every type that implements that interface will need to provide an implementation for that member. That may not be a big deal if the interface is in your own code, but as with any breaking change, in a library released to the public or to other teams it can create a lot of headaches.</p>
<p>Consider the example from my previous post:</p>
<pre><code class="language-csharp">interface ICar
{
string Make { get; }
}
public class Avalon : ICar
{
public string Make => "Toyota";
}
</code></pre>
<p>If I wanted to add a new <code>GetTopSpeed()</code> method to the interface, I'd need to then implement it in the <code>Avalon</code> class:</p>
<pre><code class="language-csharp">interface ICar
{
string Make { get; }
int GetTopSpeed();
}
public class Avalon : ICar
{
public string Make => "Toyota";
public int GetTopSpeed() => 130;
}
</code></pre>
<p>However, if I create a default implementation of the new <code>GetTopSpeed()</code> method in <code>ICar</code> I don't need to add it to every implementing class:</p>
<pre><code class="language-csharp">interface ICar
{
string Make { get; }
public int GetTopSpeed() => 150;
}
public class Avalon : ICar
{
public string Make => "Toyota";
}
</code></pre>
<p>In addition, I can still provide override implementations for classes where the default isn't appropriate:</p>
<pre><code class="language-csharp">interface ICar
{
string Make { get; }
public int GetTopSpeed() => 150;
}
public class Avalon : ICar
{
public string Make => "Toyota";
public int GetTopSpeed() => 130;
}
</code></pre>
<p>One important note: as I mentioned in my previous post, the default <code>GetTopSpeed()</code> method will only be available on variables of type <code>ICar</code>, not <code>Avalon</code>, unless you also provide an override implementation in the class. That means this technique is primarily useful if you pass around interface types rather than implementing types (otherwise you'll end up with a bunch of casts to the interface type just to access the default member implementations).</p>
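<p>To make that concrete, here's a quick sketch (my own illustration, using the <code>ICar</code>/<code>Avalon</code> code above without the class-level override) of where the cast becomes necessary:</p>
<pre><code class="language-csharp">using System;

ICar car = new Avalon();
Console.WriteLine(car.GetTopSpeed()); // 150: an interface-typed variable sees the default

Avalon avalon = new Avalon();
// avalon.GetTopSpeed();  // won't compile: GetTopSpeed() isn't a member of Avalon
Console.WriteLine(((ICar)avalon).GetTopSpeed()); // 150: works after casting to the interface

interface ICar
{
    string Make { get; }
    public int GetTopSpeed() => 150;
}

public class Avalon : ICar
{
    public string Make => "Toyota";
}
</code></pre>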
<h1 id="mixins-and-traits-sort-of">Mixins and Traits (Sort Of)</h1>
<p><a href="https://en.wikipedia.org/wiki/Mixin">Mixins</a> and the similar language concept of <a href="https://en.wikipedia.org/wiki/Trait_(computer_programming)">traits</a> both describe ways of extending the behavior of an object through composition without resorting to multiple inheritance.</p>
<p><a href="https://en.wikipedia.org/wiki/Mixin">The Wikipedia article on mixins</a> says:</p>
<blockquote>
<p>A mixin can also be viewed as an interface with implemented methods.</p>
</blockquote>
<p>Sound familiar?</p>
<p>Interfaces in C# that contain default implementations aren't exactly mixins because they can also contain unimplemented members, support interface inheritance, can be specialized, etc. However, if we make an interface that just contains default members we have a mostly traditional mixin.</p>
<p>Consider the following code that adds functionality for "moving" an object and tracking its location (for example, in a game environment):</p>
<pre><code class="language-csharp">public interface IMovable
{
public (int, int) Location { get; set; }
public int Angle { get; set; }
public int Speed { get; set; }
// A method that changes location
// using angle and speed
public void Move() => Location = ...;
}
public class Car : IMovable
{
public string Make => "Toyota";
}
</code></pre>
<p>Whoops! There's a problem with this code that I hadn't considered until I wrote it for the post and tried to compile it. Interfaces (even ones with default implementations) can't contain instance state, so auto-implemented properties aren't supported as default interface members. From the <a href="https://github.com/dotnet/csharplang/blob/master/proposals/csharp-8.0/default-interface-methods.md#detailed-design">design document for default interface members</a>:</p>
<blockquote>
<p>Interfaces may not contain instance state. While static fields are now permitted instance fields are not permitted in interfaces. Instance auto-properties are not supported in interfaces, as they would implicitly declare a hidden field.</p>
</blockquote>
<p>This is where default interface members and the concept of mixins start to diverge a bit (mixins can conceptually contain state as far as I understand them), but we can still accomplish the original goal:</p>
<pre><code class="language-csharp">public interface IMovable
{
public (int, int) Location { get; set; }
public int Angle { get; set; }
public int Speed { get; set; }
// A method that changes location
// using angle and speed
public void Move() => Location = ...;
}
public class Car : IMovable
{
public string Make => "Toyota";
// IMovable
public (int, int) Location { get; set; }
public int Angle { get; set; }
public int Speed { get; set; }
}
</code></pre>
<p>This accomplishes the original goal by making the common <code>Move()</code> method and its implementation available to all classes that apply the interface. Sure, the class still needs to provide implementations for the properties, but because they're at least declared in the <code>IMovable</code> interface, the default members in that interface can operate on those properties, and any class applying the interface is guaranteed to expose the correct state.</p>
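<p>The elided <code>Move()</code> body can be filled in any number of ways; here's a sketch of one possible implementation (the trigonometry is purely illustrative), treating <code>Angle</code> as degrees and advancing <code>Location</code> by <code>Speed</code> units along that heading:</p>
<pre><code class="language-csharp">using System;

Car car = new Car { Angle = 0, Speed = 10 };
((IMovable)car).Move();
Console.WriteLine(car.Location); // (10, 0): moved 10 units along the 0-degree heading

public interface IMovable
{
    public (int, int) Location { get; set; }
    public int Angle { get; set; }
    public int Speed { get; set; }

    // One plausible default: advance Location by Speed units
    // along the heading given by Angle (in degrees)
    public void Move()
    {
        double radians = Angle * Math.PI / 180.0;
        (int x, int y) = Location;
        Location = (
            x + (int)Math.Round(Speed * Math.Cos(radians)),
            y + (int)Math.Round(Speed * Math.Sin(radians)));
    }
}

public class Car : IMovable
{
    public string Make => "Toyota";

    // The IMovable state lives in the class, since interfaces can't hold it
    public (int, int) Location { get; set; }
    public int Angle { get; set; }
    public int Speed { get; set; }
}
</code></pre>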
<p>As a more complete and practical example, consider a logging mixin:</p>
<pre><code class="language-csharp">public interface ILogger
{
public void LogInfo(string message) =>
LoggerFactory
.GetLogger(this.GetType().Name)
.LogInfo(message);
}
public static class LoggerFactory
{
public static ILogger GetLogger(string name) =>
new ConsoleLogger(name);
}
public class ConsoleLogger : ILogger
{
private readonly string _name;
public ConsoleLogger(string name)
{
_name = name
?? throw new ArgumentNullException(nameof(name));
}
public void LogInfo(string message) =>
Console.WriteLine($"[INFO] {_name}: {message}");
}
</code></pre>
<p>I could then apply the <code>ILogger</code> interface to any class:</p>
<pre><code class="language-csharp">public class Foo : ILogger
{
public void DoSomething()
{
((ILogger)this).LogInfo("Woot!");
}
}
</code></pre>
<p>And code like:</p>
<pre><code class="language-csharp">Foo foo = new Foo();
foo.DoSomething();
</code></pre>
<p>Would produce:</p>
<pre><code>[INFO] Foo: Woot!
</code></pre>
<h1 id="replacing-extension-methods">Replacing Extension Methods</h1>
<p>The biggest area of utility I've found so far is replacing large sets of extension methods. Let's go back to a simple logging example:</p>
<pre><code class="language-csharp">public interface ILogger
{
void Log(string level, string message);
}
</code></pre>
<p>Before default interface members I would typically implement a bunch of extensions to this logging interface so that the logger implementation would only have to implement a single method but users could call a variety of overloads:</p>
<pre><code class="language-csharp">public static class ILoggerExtensions
{
public static void LogInfo(this ILogger logger, string message) =>
logger.Log("INFO", message);
public static void LogInfo(this ILogger logger, int id, string message) =>
logger.Log("INFO", $"[{id}] {message}");
public static void LogError(this ILogger logger, string message) =>
logger.Log("ERROR", message);
public static void LogError(this ILogger logger, int id, string message) =>
logger.Log("ERROR", $"[{id}] {message}");
public static void LogError(this ILogger logger, Exception ex) =>
logger.Log("ERROR", ex.Message);
public static void LogError(this ILogger logger, int id, Exception ex) =>
logger.Log("ERROR", $"[{id}] {ex.Message}");
}
</code></pre>
<p>That's fine, and it works. But it has a few shortcomings. For one, the namespaces of the static extension class and the interface may not match. It also creates some noise by requiring the <code>this ILogger logger</code> parameter and referring to a <code>logger</code> instance.</p>
<p>What I've started doing with big sets of extensions is making them default interface members instead:</p>
<pre><code class="language-csharp">public interface ILogger
{
void Log(string level, string message);
public void LogInfo(string message) =>
Log("INFO", message);
public void LogInfo(int id, string message) =>
Log("INFO", $"[{id}] {message}");
public void LogError(string message) =>
Log("ERROR", message);
public void LogError(int id, string message) =>
Log("ERROR", $"[{id}] {message}");
public void LogError(Exception ex) =>
Log("ERROR", ex.Message);
public void LogError(int id, Exception ex) =>
Log("ERROR", $"[{id}] {ex.Message}");
}
</code></pre>
<p>I find those implementations much cleaner and easier to read (and thus maintain). Using default interface members also presents some other benefits where extensions might otherwise have been used:</p>
<ul>
<li>They're in the scope of the instance and <code>this</code> can be used.</li>
<li>Other types of members like indexers can be provided.</li>
<li>They can be overridden by implementing classes to specialize the behavior.</li>
</ul>
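<p>To illustrate, here's a minimal sketch: an implementing class (a hypothetical <code>ConsoleLogger</code>, trimmed to two default overloads for brevity) only has to supply the single abstract <code>Log</code> method, yet any caller holding an <code>ILogger</code> gets every overload:</p>
<pre><code class="language-csharp">using System;

ILogger logger = new ConsoleLogger();
logger.LogInfo("starting up");     // prints "[INFO] starting up"
logger.LogError(7, "disk full");   // prints "[ERROR] [7] disk full"

public interface ILogger
{
    void Log(string level, string message);

    public void LogInfo(string message) => Log("INFO", message);
    public void LogError(int id, string message) => Log("ERROR", $"[{id}] {message}");
}

// Only the unimplemented contract needs an implementation here
public class ConsoleLogger : ILogger
{
    public void Log(string level, string message) =>
        Console.WriteLine($"[{level}] {message}");
}
</code></pre>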
<p>Something that bugs me about the code above, though, is that it's not totally clear which members form the required, unimplemented contract of the interface and which are implemented by default. A comment separating the two sections might help, but I do like how extension classes are explicit in this regard.</p>
<p>To address that, I've started making any interface that contains default members partial (other than one or two trivial ones). Then I put the default members in other files with the naming convention "ILogger.LogInfoDefaults.cs", "ILogger.LogErrorDefaults.cs", etc. If I only have a small set of default members that don't suggest any sort of grouping, I name the file "ILogger.Defaults.cs".</p>
<p>This separates the default member implementations from the unimplemented contract that any implementing class will have to provide implementations for. It also helps break up what could become a very long file. There's even a neat trick to enable ASP.NET-style Visual Studio file nesting in any project format. Add this to your project file or <code>Directory.Build.props</code>:</p>
<pre><code class="language-xml"><ItemGroup>
<ProjectCapability Include="DynamicDependentFile"/>
<ProjectCapability Include="DynamicFileNesting"/>
</ItemGroup>
</code></pre>
<p>Then you can select "File Nesting" in the Solution Explorer and all your <code>.Defaults.cs</code> files will appear as children of the main interface file.</p>
<p>Finally, there are still some situations where extension methods are preferred:</p>
<ul>
<li>If you typically pass around class types instead of the interface type (because you'd have to cast to the interface to access the default member implementations).</li>
<li>If you often use the pattern <code>public static T SomeExt<T>(this T foo)</code> to return the exact type the extension was called for (for example, in a fluent API).</li>
</ul>
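<p>As a sketch of that second case (with a hypothetical <code>IBuilder</code> and <code>CarBuilder</code> invented for illustration), the generic <code>this T</code> pattern preserves the caller's concrete type through the chain, which a default interface member typed as the interface can't do:</p>
<pre><code class="language-csharp">using System;

CarBuilder builder = new CarBuilder()
    .WithName("Avalon")   // returns CarBuilder, not IBuilder...
    .WithDoors(4);        // ...so chaining keeps the concrete type
Console.WriteLine(builder.Name); // prints "Avalon"

// Hypothetical types for illustration
public interface IBuilder
{
    string Name { get; set; }
}

public class CarBuilder : IBuilder
{
    public string Name { get; set; }
    public int Doors { get; private set; }
    public CarBuilder WithDoors(int doors) { Doors = doors; return this; }
}

public static class BuilderExtensions
{
    // The generic 'this T' parameter returns the caller's concrete type;
    // a default interface member would be typed as IBuilder instead
    public static T WithName<T>(this T builder, string name) where T : IBuilder
    {
        builder.Name = name;
        return builder;
    }
}
</code></pre>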
https://daveaglick.com/posts/default-interface-members-and-inheritance
Default Interface Members and Inheritance
2019-09-06T00:00:00Z
<p>Default interface members (or "DIM" as I've seen the feature called) is a new language feature available in C# 8 that lets you define implementations directly in an interface. I started out with the intent of writing about use cases for the feature, but ended up writing so much that I decided to split the post in two. This part deals with how default interface members need to be invoked and the differences in semantics between class inheritance and default interface member implementation.</p>
<h1 id="must-invoke-from-the-interface">Must Invoke From The Interface</h1>
<p>Consider the following code:</p>
<pre><code class="language-csharp">interface ICar
{
// Seems like a reasonable default
public int GetTopSpeed() => 150;
}
public class Elantra : ICar
{
}
</code></pre>
<p>This defines an interface <code>ICar</code> with a method <code>GetTopSpeed()</code> and that method has a default implementation. You might think you could then write:</p>
<pre><code class="language-csharp">Elantra e = new Elantra();
e.GetTopSpeed();
</code></pre>
<p>But that won't compile. You have to invoke default interface members from an instance of the interface (unless they've been redefined, more on that in a minute):</p>
<pre><code class="language-csharp">Elantra e = new Elantra();
((ICar)e).GetTopSpeed();
</code></pre>
<p>At this point you might be thinking "well that seems silly," but there's a good reason why default interface members behave this way. Consider the following:</p>
<pre><code class="language-csharp">interface ICar
{
// Seems like a reasonable default
public int GetTopSpeed() => 150;
}
interface IMovable
{
// Nothing moves faster than the speed of light
public int GetTopSpeed() => 671000000;
}
public class Elantra : ICar, IMovable
{
}
</code></pre>
<p>If you called <code>GetTopSpeed()</code> on an instance of <code>Elantra</code> what would the result be? Are you actually calling <code>ICar.GetTopSpeed()</code> or <code>IMovable.GetTopSpeed()</code>? This problem (often referred to as "diamond inheritance") is one of the reasons true multiple inheritance is so difficult to do well in a language like C++. To avoid it, the C# language team explicitly elected <em>not</em> to make default interface members a mechanism to achieve multiple inheritance. Instead you have to be explicit about which implementation you're calling to remove all ambiguity.</p>
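<p>For example, given the code above, each default implementation is reachable only through its own interface, so there's no ambiguity:</p>
<pre><code class="language-csharp">using System;

Elantra e = new Elantra();
// e.GetTopSpeed();  // won't compile: the defaults only exist on the interfaces
Console.WriteLine(((ICar)e).GetTopSpeed());     // 150
Console.WriteLine(((IMovable)e).GetTopSpeed()); // 671000000

interface ICar
{
    public int GetTopSpeed() => 150;
}

interface IMovable
{
    public int GetTopSpeed() => 671000000;
}

public class Elantra : ICar, IMovable
{
}
</code></pre>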
<h1 id="default-implementations-vs.inheritance">Default Implementations vs. Inheritance</h1>
<p><a href="https://twitter.com/daveaglick/status/1169777331608707075">Something that initially confused me</a> was the relationship between default interface members and the way members are inherited in a traditional class hierarchy. Consider this code:</p>
<pre><code class="language-csharp">interface ICar
{
public string Make { get; }
public int Cylinders => 4;
}
public abstract class Toyota : ICar
{
public string Make => "Toyota";
}
public class Avalon : Toyota
{
public int Cylinders => 6;
}
</code></pre>
<p>What would you expect this code to output:</p>
<pre><code class="language-csharp">ICar car = new Avalon();
Console.WriteLine(car.Cylinders);
</code></pre>
<p>My initial reaction was that this should output <code>6</code>, but it actually outputs <code>4</code>.</p>
<p>The reason is that <code>Avalon.Cylinders</code> isn't actually implementing <code>ICar.Cylinders</code>, given that the interface is implemented implicitly via the base <code>Toyota</code> class. They're two totally different properties.</p>
<p><a href="https://twitter.com/ben_a_adams/status/1169790052425240581">Ben Adams was the first</a> of many to point out that this behavior isn't actually different from the way interfaces currently work. The code above is essentially equivalent to writing the following, which will also output <code>4</code> instead of <code>6</code>:</p>
<pre><code class="language-csharp">interface ICar
{
public string Make { get; }
public int Cylinders { get; }
}
public abstract class Toyota : ICar
{
public string Make => "Toyota";
int ICar.Cylinders => 4;
}
public class Avalon : Toyota
{
public int Cylinders => 6;
}
</code></pre>
<p>I envision this being something I'll have to keep reminding myself about. I think the reason is that the semantics are different from what we're used to after a decade of working with <code>virtual</code> and <code>override</code> in class hierarchies.</p>
<p>More specifically, up until default interface members we <em>had</em> to provide an implementation within an implementing class because the interface simply couldn't contain one. That means in the code above for the abstract <code>Toyota</code> base class I would've had to write one of these:</p>
<ul>
<li><code>public int Cylinders => 4</code> to implement the interface property and provide a default value, forcing the property into the inheritance chain of <code>Toyota</code>.</li>
<li><code>public abstract int Cylinders { get; }</code> to define the interface property as abstract and force derived classes to provide an implementation.</li>
<li><code>int ICar.Cylinders => 4</code> to implement the interface property and provide a default value, but not place the property into the inheritance chain of <code>Toyota</code>.</li>
</ul>
<p>I've come to think of that last syntax as "opting-out" of class inheritance. I have to have <em>something</em> that implements the interface property (because it's not implemented in the interface) and I have to use a special syntax that makes it very clear I'm implementing the property at the interface and not the class level if that's my intent. <strong>If you don't want the property to be a part of the class inheritance hierarchy you have to opt-out</strong>.</p>
<p>Contrast that with the semantics of a default interface member. The equivalent <code>int ICar.Cylinders => 4</code> definition never has to show up in the implementing <code>Toyota</code> class since the default property implementation was provided directly in the interface. In this case <code>Cylinders</code> has an implementation from the interface so you're not forced to put <em>anything</em> in the <code>Toyota</code> class about it. That property does not belong to the class in this case. <strong>If you want the property to be a part of the class inheritance hierarchy you have to opt-in</strong>:</p>
<pre><code class="language-csharp">interface ICar
{
public string Make { get; }
public int Cylinders => 4;
}
public abstract class Toyota : ICar
{
public string Make => "Toyota";
public virtual int Cylinders =>
((ICar)this).Cylinders;
}
public class Avalon : Toyota
{
public override int Cylinders => 6;
}
</code></pre>
<p>This code will output the expected <code>6</code> because we "opted-in" to implementing the <code>Cylinders</code> property in the <code>Toyota</code> class instead of leaving the implementation in the interface. The <code>Toyota</code> class only invokes the implementation from the interface, but by doing so we've placed the property implementation into the class inheritance hierarchy and can now rely on the <code>virtual</code> and <code>override</code> behavior we know.</p>
<p>One final note: the <code>((ICar)this).Cylinders</code> syntax in the class implementation that calls the default interface implementation is awkward. <a href="https://github.com/dotnet/csharplang/issues/406">There's an open issue</a> to add support for <code>base(ICar).Cylinders</code> syntax, but it requires changes to the CLR so <a href="https://github.com/dotnet/csharplang/blob/master/meetings/2019/LDM-2019-04-29.md#conclusion">it got pushed to a later language version</a>.</p>
<p><strong>Update: don't use the code above!</strong> If you do, you're asking for trouble. It occurred to me after writing the post, and was pointed out by a few folks on Twitter, that the pattern above with a call to <code>((ICar)this).Cylinders</code> will <em>only</em> work if <code>Cylinders</code> is implemented in a derived class. In that case the call invokes the derived implementation and you're fine. If it's not implemented in a derived class though, BOOM! You'll end up with a stack overflow because the method will invoke itself recursively. I'm leaving the example here for educational purposes. This example illustrates why we really need the <code>base(ICar)</code> feature to handle bridging default interface members and class inheritance hierarchies.</p>
https://daveaglick.com/posts/some-thoughts-on-feelings-in-open-source
Some Thoughts On Feelings In Open Source
2019-03-11T00:00:00Z
<p>It's been a while since I've posted anything about open source communities, but that doesn't mean I haven't continued to think about them. It's an issue that's near and dear to me and I spend a lot of time considering different aspects of open source. I'd like to take a moment to talk about one of those in particular: feelings. More specifically, why they matter in open source and some ideas on how best to incorporate them into our open source interactions.</p>
<p>I want to be clear: while the Twitter exchange was essentially subtweeting a particular blog post, the rest of what I write below is more general. I don't want there to be confusion about what the blog post in question does or doesn't say, or how it does or doesn't present its arguments in contrast with what I write below. I think this issue is more general than that, and so I'm not going to link to the other blog post to avoid distraction.</p>
<p>In the Twitter thread I used an analogy of building a bird house. If I want to assemble a simple bird house by joining sides together with nails, I have a whole toolbox at my disposal. I could use a screwdriver and drive the nails with the handle. I could pull out a rubber mallet and use that. Or I could select a hammer and get the job done efficiently and quickly. It seems obvious that everyone should choose the hammer. But what if I really like screwdrivers? What if I've never even seen a hammer, but I'm really good at driving nails with a handle? Maybe I simply don't feel like learning about hammers right now?</p>
<p>This brings us to the first point I was trying to make: <strong>that people choose tools for all sorts of reasons and there's never really a "best for everyone" option</strong>. A vibrant open source community should have lots of tools that could drive those nails, and we should celebrate and encourage that diversity because you never know when a nail is just the right shape that it can only be driven by a screwdriver handle. In my opinion it's far more valuable to talk about why hammers make a good choice for driving nails than why screwdrivers do not. That gives the consumer room to weigh their own use case against the stated benefits of all their options and pick the best one <em>for them</em>. It might seem like saying "screwdrivers are terrible for this because..." is valuable, and maybe it is (more on that below), but without actionable information about what else I could use it's kind of a moot point. And if I'm going to be talking about why a hammer is such a great tool for this job in the first place, is it really all that important to mention the screwdriver?</p>
<p>This is where the birdhouse analogy starts to fall apart in the context of open source. Open source tools and libraries are not physical items in a toolbox. They're not manufactured by faceless tool companies for the purposes of monetary profit. And this brings us to my second, and more important point: <strong>open source is (mostly) created and consumed by individuals and we need to be mindful of the inherent humanity in such a system</strong>.</p>
<p>I can relate to the idea that as software craftspeople we need to put feelings aside and be analytical about our choices, both for ourselves and for our communities. After all, that's how we've been trained to write software in the first place. If a particular convention, methodology, tool, etc. isn't the best, shouldn't we do everything we can to make sure that knowledge reaches far and wide so everyone builds better software? Like other engineering disciplines, feelings have no place in this world. Does a structural engineer reserve judgement on a bridge design just because it might upset one of their peers? People might die if they do.</p>
<p>But here's the thing: open source is different. It's different because the people behind it, the people making it tick, are largely doing that for free, in their own time, without a defined and accepted convention for compensation (monetary or otherwise). As a global software community, we are in the midst of trying to figure out how to make open source more sustainable. There's been a lot of discussion around this lately and there will continue to be a lot more. It's a real problem without a good solution right now. And one of the biggest sustainability challenges is how to prevent burn-out. How do we, as software practitioners who rely more and more on the generosity and selflessness of open source maintainers, ensure that those maintainers stay healthy?</p>
<p>I think one of the ways we do it is by recognizing that this open source system we rely so heavily on has an intrinsic component of humanity that can't be abstracted away like we do with other economic systems. More directly, maintainers are people and people have feelings. But why is this different? Why shouldn't I feel comfortable saying something like "Kia cars are crap that don't get good gas mileage so you shouldn't buy one"? There are a few reasons why open source is different. For one, it's more direct. The maintainers working on the open source that you consume are typically peers that you interact with directly or through a minimum of hops. They hear what you say and read what you write.</p>
<p>Another difference is that those maintainers are often making personal sacrifices for your benefit, usually with very little in return. Sometimes open source is a job, but most of the time it's personal. That library you just took a dependency on can often be measured in weekends on a laptop instead of playing outdoors, evenings writing code instead of watching a movie with family, sleepless nights wondering about the best way to architect a new feature. Those are choices the maintainers made, so we shouldn't regret or feel bad about them, but we should at least recognize that there's likely a measure of personal sacrifice involved.</p>
<p>To get analytical, open source is an economy. And like any economy there are "currencies". In this case, the currency is often intangible. Open source maintainers do what they do for a variety of reasons. I suspect that while working on open source is a selfless act for many maintainers, it's not entirely so. There are other currencies involved. Some maintainers would like name recognition. Some seek approval or acceptance. Some believe it will have indirect monetary benefits like helping to land a better job. In all of these cases, there's an expectation that the consumers of open source will end up paying that currency in one way or another in exchange for the work provided. The fact that this exchange is both assumed and poorly defined is a problem for a different blog post, but it's there regardless.</p>
<p>The point I'm trying to make is that different "currencies" like approval, name recognition, acceptance, and networking all have a basis in personal interactions...or to put it another way: <em>feelings</em>. We <em>can't</em> separate feelings from open source because feelings are intrinsic in the assumed exchange between maintainers and consumers. To ignore them is to ignore a large part of the implicit contract that's present in open source, whether we like it or not. One thing we can all do, maintainers and consumers alike, to strengthen our open source communities is to recognize the presence and value of feelings...of humanity...in this system, accept it, and integrate that understanding into our interactions.</p>
<p>So to come back to the original discussion about presenting alternatives, does this mean we should never talk about how one tool is better than another? That we should never present comparisons or pros and cons? Absolutely not, that's absurd. The case I'm making is to be softer in our approach. To realize that the words you use and the phrasing of your advice has a real impact and actual consequences on the very software community you're trying to improve. That in the equation of how to discuss open source we have to account for the feelings of maintainers as one of many variables instead of dismissing them as irrelevant. Sometimes...many times...that's just a matter of phrasing or presentation. For example, instead of talking about why a tool is bad, try talking about why it's good first. Instead of explaining why you <em>wouldn't</em> use something, try explaining why you <em>would</em> first. Write comparison tables with both cons <em>and</em> pros. Treat discussions about open source as if you were personally discussing the maintainers themselves, because for them it often is personal. In other words: a little kindness and empathy goes a long way.</p>
https://daveaglick.com/posts/announcing-azurepipelines-testlogger
Announcing AzurePipelines.TestLogger
2018-12-17T00:00:00Z
<p>In today's episode of "what crazy niche has Dave gotten sucked into this time?" I announce a new test logger for the Visual Studio Test Platform designed to publish your test results in real-time to Azure Pipelines. This means that you can run <code>dotnet test</code> from your build script on Azure Pipelines and feed your test results directly to the test summary for your build without having to rely on post-processing like the <code>PublishTestResults</code> Azure Pipelines task.</p>
<p>Before we get to publishing results to Azure Pipelines, let's back up a step and briefly consider what a Visual Studio Test Platform test logger actually is. <a href="https://github.com/Microsoft/vstest-docs/blob/master/docs/report.md">According to the official docs</a>, "A test logger is a test platform extension to control reporting of test results." That's not particularly helpful. What it really means is that you can write a library to hook into what's happening with your test run and do something with that information. The API for this isn't great or well documented, basically a single interface with a few event handlers, but it's enough to get details about each test run.</p>
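<p>The basic shape is easy to sketch. The real types live in the vstest ObjectModel package (<code>ITestLogger</code> and <code>TestLoggerEvents</code>, as I recall, so check the vstest docs for the actual API); the stand-in types below just mimic that shape to show the event-handler pattern:</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;

var events = new TestLoggerEvents();
var logger = new RecordingLogger();
logger.Initialize(events, testRunDirectory: ".");

// Simulate what the test platform would do during a run
events.RaiseResult(new TestResultEventArgs { TestName = "MyTest", Outcome = "Passed" });
events.RaiseComplete();
Console.WriteLine(string.Join(Environment.NewLine, logger.Lines));

// Stand-ins mimicking the shape of the vstest ObjectModel types
public class TestResultEventArgs : EventArgs
{
    public string TestName { get; set; }
    public string Outcome { get; set; }
}

public class TestLoggerEvents
{
    public event EventHandler<TestResultEventArgs> TestResult;
    public event EventHandler TestRunComplete;

    public void RaiseResult(TestResultEventArgs e) => TestResult?.Invoke(this, e);
    public void RaiseComplete() => TestRunComplete?.Invoke(this, EventArgs.Empty);
}

public interface ITestLogger
{
    void Initialize(TestLoggerEvents events, string testRunDirectory);
}

// A logger just subscribes to the run events and does something with them;
// AzurePipelines.TestLogger does this and forwards the results to the REST API
public class RecordingLogger : ITestLogger
{
    public List<string> Lines { get; } = new List<string>();

    public void Initialize(TestLoggerEvents events, string testRunDirectory)
    {
        events.TestResult += (s, e) => Lines.Add($"{e.TestName}: {e.Outcome}");
        events.TestRunComplete += (s, e) => Lines.Add("Run complete");
    }
}
</code></pre>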
<p>The AzurePipelines.TestLogger then registers handlers for these test events, builds a hierarchy from the test and source (i.e., file) names, and publishes that to Azure Pipelines while your tests are running using the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-5.0">Azure DevOps REST API</a>. There were some tricky parts such as figuring out which version of the API to specify for which endpoint (each endpoint is versioned a little differently, particularly with preview versions). Getting some of the Azure Pipelines-specific data like nested test results and parent test durations to work was also a challenge. Now that I've worked through everything, I rather like the result:</p>
<p><img src="https://daveaglick.com/posts/images/test-summary.png" class="img-fluid"></p>
<p>Each test "run" is shown at the root of the result tree (a run is the combination of test assembly and build job/agent). Then each test fixture or class is shown at the second level with its fully qualified name (minus the root namespace). Nested classes are shown with <code>+</code> notation. Then individual tests are displayed at the third level. This three-deep hierarchy keeps very large test runs nice and tidy. On the downside, the Azure Pipelines test summary will only show statistics for top-level tests. That's not ideal for a logger that nests results like this one, but the clarity of grouping tests under their fixture is more valuable than listing a correct total in the test summary in my opinion. Thankfully the pass/fail will still "bubble up" so even though the summary may show fewer tests than actually exist, it'll still correctly indicate if any tests are failing (which would then require a drill-down to figure out which ones are failing). <a href="https://developercommunity.visualstudio.com/content/idea/409015/show-all-tests-in-the-hierarchy-in-test-summary.html">There's an open feature suggestion here for showing all nested tests in the summary</a>.</p>
<p>If you're using .NET and Azure Pipelines and you need this in your life, <a href="https://github.com/daveaglick/AzurePipelines.TestLogger">head on over to the GitHub repository</a> for installation and usage instructions. Happy testing.</p>
The personal blog of Dave Glick
https://daveaglick.com/posts/code-from-your-phone
Code From Your Phone
2018-11-19T00:00:00Z
<p>I've long been a fan of <a href="https://daveaglick.com/posts/development-on-the-go">mobile development workflows</a>. I've also been interested in the convergence of .NET Core on Linux and containers as a way to enable rapid, self-contained .NET development environments. It turns out that updates to mobile tools, improved container hosting, and a little elbow grease can create a very nice mobile development setup that includes the ability to easily work with GitHub and git, edit files, and run builds and unit tests all from your phone or tablet (assuming your phone or tablet is running iOS - someone else will have to figure out how to do this on Android).</p>
<h1 id="the-tools">The Tools</h1>
<p>First let's look at the tools that we're going to use to make this possible.</p>
<h2 id="codeanywhere">Codeanywhere</h2>
<p>I've had my eye on the <a href="https://codeanywhere.com/">Codeanywhere</a> service for a while, but it wasn't until recently, with a refreshed app and updated containers, that it really made sense for .NET development. I'm happy to report it's been working very well for me. This will provide the containers we're going to use for development, but more importantly, it's going to provide out-of-the-box SSH access and a fantastic app for managing our containers and interacting with the terminal.</p>
<p>While Codeanywhere does have free plans that might work for you, I'm going to suggest the "Freelancer" plan, which provides more functionality and, most importantly, increased container disk quotas (which can be an issue with .NET development). It runs about $84/year (or $10/month), which is well worth it in my opinion if you're serious about mobile development.</p>
<h2 id="working-copy">Working Copy</h2>
<p><a href="https://workingcopyapp.com/">Working Copy</a> is an iOS git client app and it continues to improve at a rapid pace. It's great on its own, but really shines when you consider the many ways it can integrate with other apps and services. More specifically, we're going to use a recently introduced feature that lets you <a href="https://workingcopyapp.com/manual/ssh-upload">upload your repository to an SSH server</a>.</p>
<h2 id="textastic">Textastic</h2>
<p>While the Codeanywhere app provides an excellent file editing experience, <a href="https://www.textasticapp.com/">Textastic</a> integrates directly with Working Copy, and since we're going to use the mobile device as the "source of truth" for our repository, we need to edit files locally. Thankfully, it's also an amazing code editor with syntax highlighting, a code-oriented keyboard, and more, and it only costs $10 to boot.</p>
<h2 id="pricing">Pricing</h2>
<p>While not expensive, this isn't going to be a totally free setup either. Each of these tools is robust and well developed and rightly charges for its use. If you ask me, the cost of each is a bargain considering what they do.</p>
<ul>
<li><strong>Codeanywhere</strong>: $84/year for the "Freelancer" plan</li>
<li><strong>Working Copy</strong>: $16 for the pro unlock which enables SSH features (among many others)</li>
<li><strong>Textastic</strong>: $10</li>
</ul>
<h1 id="the-setup">The Setup</h1>
<h2 id="create-your-container">Create Your Container</h2>
<p>The first step is to create your container on Codeanywhere. To do so, create an account, <a href="https://codeanywhere.com/dashboard">open your dashboard</a>, and create a new project (you might be prompted to create your first project automatically when you create your account as well). When you open your new project you'll be prompted to add a new container:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-new-container.png" class="img-fluid"></p>
<p>Select the Ubuntu .NET Core image (or CentOS if that's your thing) and then select "Create". That's all you need to do and your new container will spin up in the background.</p>
<p>Once it's online you can connect to it within the app using SSH:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-container-ssh.png" class="img-fluid"></p>
<p>...and check the .NET Core version:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-dotnet-version.png" class="img-fluid"></p>
<h2 id="clone-into-working-copy">Clone Into Working Copy</h2>
<p>The next step is to clone the repository we want to work on into Working Copy. You can use Working Copy's <a href="https://workingcopyapp.com/manual/hosting-provider">hosting provider integration</a> or just clone straight from the repository URL:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-clone.png" class="img-fluid"></p>
<p>I'll use my project <a href="https://github.com/daveaglick/MsBuildPipeLogger">MsBuildPipeLogger</a> as an example for the rest of this post.</p>
<h2 id="get-ssh-information-from-container">Get SSH Information From Container</h2>
<p>Now we're going to configure Working Copy to upload and synchronize changes to our container.</p>
<p>The first step is to figure out our SSH host. Click on the container actions button and then select "Info":</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-container-actions.png" class="img-fluid"></p>
<p>From the info screen, look for the hostname and port and note it down:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-container-info.png" class="img-fluid"></p>
<p>Next we're going to get the private SSH key from our container. This is stored at <code>/home/cabox/.ssh/id_rsa</code> (note also that the default user is <code>cabox</code>). The easiest way to get its contents is to copy it to your home root with a command like <code>cp /home/cabox/.ssh/id_rsa ~</code>. Open a terminal and type that in:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-copy-key.png" class="img-fluid"></p>
<p>Then open that file directly from the Codeanywhere container file browser and copy its contents to the clipboard:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-copy-key-2.png" class="img-fluid"></p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-copy-key-3.png" class="img-fluid"></p>
<p>Note that the Codeanywhere file editor can be a little finicky and you might have to try to get the whole file contents within the text selector a couple times before you get everything. The goal is to get your private key onto the iOS clipboard where Working Copy can get to it.</p>
<h2 id="configure-working-copy-for-ssh-upload">Configure Working Copy for SSH Upload</h2>
<p>Now we're going to move over to Working Copy and add our SSH key. Open the settings from the upper-right gear icon and then select "SSH Keys":</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-ssh-keys.png" class="img-fluid"></p>
<p>Then add the SSH key we copied from Codeanywhere by clicking the + icon in the upper-right and selecting "Import from Clipboard":</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-import-ssh.png" class="img-fluid"></p>
<p>Once you're done you should see the new key in the list of keys.</p>
<p>Now we'll add SSH Upload support to the repository we cloned. This will make Working Copy synchronize all changes within the app to the remote container. To add SSH support, open the repository and then click on the repository "Status and Configuration":</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-repo-config.png" class="img-fluid"></p>
<p>Then click the iOS action button in the upper-right corner and select "SSH Upload" from the list of actions:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-repo-actions.png" class="img-fluid"></p>
<p>Once you add the SSH host and port, Working Copy will ask if you want to accept the server key and then ask you to authenticate. Use <code>cabox</code> as the username and leave the password blank to use the SSH key you just added to Working Copy.</p>
<p>Once that's done, we'll select the remote folder we want to synchronize to. Select the folder icon next to "Remote" to open the folder selection:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-ssh-folder.png" class="img-fluid"></p>
<p>Then select "workspace" and add a new subfolder for our files:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-ssh-new-folder.png" class="img-fluid"></p>
<p>When you're all ready, select the "Upload" button to initiate the synchronization:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-upload.png" class="img-fluid"></p>
<h2 id="edit-files">Edit Files</h2>
<p>Now let's switch gears a little bit and edit one of our files like the readme. Open the Textastic app, select "Open..." and then select Working Copy as the location to open files from:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-open-folder.png" class="img-fluid"></p>
<p>That will show folders for each of the repositories in Working Copy. Select "Select" from the top menu, highlight the repository folder you want to open, and then select "Open" from the top menu:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-select-folder.png" class="img-fluid"></p>
<p>That will add the folder to Textastic and allow you to open and edit files in it:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-edit-file.png" class="img-fluid"></p>
<h2 id="synchronize-changes">Synchronize Changes</h2>
<p>After you've edited some files, switch back to Working Copy. You'll probably get a message about no longer being able to upload in the background. That's okay, just open the SSH settings from the terminal icon in the upper-right corner (if they're not already open), and select "Upload" to initiate the synchronization.</p>
<p>Now that the files are saved into Working Copy from Textastic, you can also initiate git commands such as committing your changes before or after you work with the files from your container.</p>
<h2 id="open-terminal-to-run-a-build">Open Terminal To Run A Build</h2>
<p>The last thing we'll do is switch back to Codeanywhere to open a terminal and run a build with our newly changed files. Open the SSH terminal for your container from within Codeanywhere and the folder you created from Working Copy should now be there:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-sync-folder.png" class="img-fluid"></p>
<p>As with any bash prompt, you can open the folder and run a build:</p>
<p><img src="https://daveaglick.com/posts/images/code-from-your-phone-build.png" class="img-fluid"></p>
<h1 id="alternate-setups">Alternate Setups</h1>
<p>What I've described is only one way you could use these tools. Some alternatives include:</p>
<ul>
<li>Do everything from the Codeanywhere app. It includes the ability to browse the file structure in your container and edit files with a nice coding keyboard directly from the app. The downside is that you'll need to manage all your git commands yourself from the command line, which can get cumbersome on a mobile device.</li>
<li>Set up <a href="https://workingcopyapp.com/manual/ssh-command">SSH Commands</a> in Working Copy to let you run build or other commands directly from the app. This might be preferable to switching over to the Codeanywhere app for commonly run commands.</li>
</ul>
https://daveaglick.com/posts/pushing-packages-from-azure-pipelines-to-azure-artifacts-using-cake
Pushing Packages From Azure Pipelines To Azure Artifacts Using Cake
2018-11-06T00:00:00Z
<p>This is a short post about using <a href="https://cakebuild.net/">Cake</a> to publish packages from <a href="https://azure.microsoft.com/en-us/services/devops/pipelines">Azure Pipelines</a> to <a href="https://azure.microsoft.com/en-us/services/devops/artifacts">Azure Artifacts</a> that took me the better part of a day to figure out. For completeness I'll walk through my entire process, but if you just want to know how to do it, skip to the end.</p>
<p>I've been a very happy user of <a href="https://www.appveyor.com/">AppVeyor</a> and <a href="https://www.myget.org/">MyGet</a> for my open source work. At my day job we use an on-premises <a href="https://www.atlassian.com/software/bamboo">Bamboo</a> server which also sends packages to MyGet. In both cases, publishing a package from a Cake build script is relatively straightforward and basically involves getting an API key from MyGet and feeding that to the Cake <a href="https://cakebuild.net/api/Cake.Common.Tools.NuGet/NuGetAliases/08163C34"><code>NuGetPush</code></a> alias. Now that I'm investigating moving some of the workloads at my day job to <a href="https://dev.azure.com/">Azure DevOps services</a>, I'm finding this simple task isn't so straightforward.</p>
<h1 id="personal-access-token">Personal Access Token</h1>
<p>My first attempt was to get an <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate">Azure DevOps personal access token</a> with package management grants and feed that to the <code>NuGetPush</code> Cake alias, just like I was used to doing with MyGet. That resulted in error messages that look like this:</p>
<pre><code>Unable to load the service index for source https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json.
Response status code does not indicate success: 401 (Unauthorized).
</code></pre>
<p>After that, I took the most reasonable first troubleshooting step...and ranted on Twitter:</p>
<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Good grief. I always feel so dumb when trying to do anything with Azure. Can't figure out how to push a package from Azure Pipelines to Azure Artifacts using an API key. And no, I don't want to setup and use a special credential provider. This was so easy with AppVeyor/MyGet.</p>— Dave Glick (@daveaglick) <a href="https://twitter.com/daveaglick/status/1059801965415272448?ref_src=twsrc%5Etfw">November 6, 2018</a></blockquote>
<p>Unfortunately the answer wasn't what I wanted to see:</p>
<blockquote class="twitter-tweet" data-conversation="none" data-cards="hidden" data-partner="tweetdeck"><p lang="en" dir="ltr">Thanks for the feedback. There are some under-the-hood reasons why we don't support apikey. But, Azure Pipelines has a "NuGet" task (or ".NET Core" if you prefer) that will automatically authenticate to Azure Artifacts for both push and restore.</p>— Alex Mullans (@alexmullans) <a href="https://twitter.com/alexmullans/status/1059811282851905536?ref_src=twsrc%5Etfw">November 6, 2018</a></blockquote>
<p>It turns out that you can't use a personal access token as an API key to publish packages to Azure Artifacts.</p>
<h1 id="credential-provider">Credential Provider</h1>
<p>My next step was to take a look at the <a href="https://github.com/Microsoft/artifacts-credprovider">VSTS Credential Provider</a>. It's essentially <a href="https://docs.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget#publish-a-package">the only documented way of publishing a package</a>. Thankfully the credential provider is on NuGet as <a href="https://www.nuget.org/packages/Microsoft.VisualStudio.Services.NuGet.CredentialProvider">Microsoft.VisualStudio.Services.NuGet.CredentialProvider</a> so you can add it as a tool to your Cake script:</p>
<pre><code>#tool "nuget:?package=Microsoft.VisualStudio.Services.NuGet.CredentialProvider&version=0.37.0"
</code></pre>
<p>Once you've installed it, you need to tell NuGet where to find it. Fortunately there's an environment variable called <code>NUGET_CREDENTIALPROVIDERS_PATH</code> that NuGet uses to find credential providers. We can set it from our Cake script like this:</p>
<pre><code>var credentialProviderPath = GetFiles("**/CredentialProvider.VSS.exe").First().FullPath;
Information("Setting NUGET_CREDENTIALPROVIDERS_PATH to " + credentialProviderPath);
System.Environment.SetEnvironmentVariable("NUGET_CREDENTIALPROVIDERS_PATH", credentialProviderPath, EnvironmentVariableTarget.Machine);
</code></pre>
<p>Less fortunately, this doesn't seem to work at all. In fact, I couldn't get NuGet to recognize the <code>NUGET_CREDENTIALPROVIDERS_PATH</code> environment variable no matter how it was set (and I tried everything, including using the <code>NuGetPushSettings.EnvironmentVariables</code> property). That led to just copying the credential provider alongside <code>nuget.exe</code>:</p>
<pre><code>var credentialProviderPath = GetFiles("**/CredentialProvider.VSS.exe").First().FullPath;
var nugetPath = GetFiles("**/nuget.exe").First().GetDirectory();
CopyFiles(new [] { credentialProviderPath }, nugetPath);
</code></pre>
<p>This allowed NuGet to find the credential provider, but at that point I couldn't figure out how to automatically get it to authenticate:</p>
<pre><code>CredentialProvider.VSS: Getting new credentials for source:https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json, scope:vso.packaging_write vso.drop_write
CredentialProvider.VSS: Couldn't get an authentication token for https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json.
Unable to load the service index for source https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json.
Response status code does not indicate success: 401 (Unauthorized).
</code></pre>
<p>Most of the documentation talks about using the credential provider interactively, either by displaying a UI or prompting for credentials on the command line. I'm sure there's a way to make this work from a script, but I was getting pretty frustrated with the credential provider at this point.</p>
<h1 id="system.accesstoken">System.AccessToken</h1>
<p>I was tipped off by my Cake buddies to some blog posts from <a href="https://kevsoft.net/2018/08/06/configuring-private-vsts-nuget-feeds-with-cake.html">Kevin Smith</a> and <a href="https://tech.trailmax.info/2017/01/publish-to-vsts-nuget-feed-from-cakebuild/">Max Vasilyev</a> about using OAuth tokens for publishing to VSTS. It turns out Azure Pipelines exposes a special pipeline variable named <code>System.AccessToken</code> that contains an OAuth key for the VSTS/Azure DevOps REST API. <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables#systemaccesstoken">You have to manually activate it from your YAML file</a>:</p>
<pre><code>variables:
SYSTEM_ACCESSTOKEN: $(System.AccessToken)
</code></pre>
<p>That <em>should</em> provide access to a <code>SYSTEM_ACCESSTOKEN</code> environment variable from inside your scripts, but...wait for it...:</p>
<pre><code>Could not resolve SYSTEM_ACCESSTOKEN
</code></pre>
<p>Bet you saw that coming. For some reason, I couldn't figure out how to set the environment variable globally, but I was able to set it at the script level:</p>
<pre><code>steps:
- script: build -target Publish
env:
SYSTEM_ACCESSTOKEN: $(System.AccessToken)
</code></pre>
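<p>With the mapping in place, the build script sees a plain environment variable. Here's a minimal shell sketch of the fail-fast check you'd want before using it (the token value is a stand-in for illustration, not a real secret):</p>

```shell
# Stand-in for the value Azure Pipelines injects via the env: mapping above.
SYSTEM_ACCESSTOKEN="fake-oauth-token"

# Fail fast if the variable wasn't mapped into the step.
if [ -z "$SYSTEM_ACCESSTOKEN" ]; then
  echo "Could not resolve SYSTEM_ACCESSTOKEN" >&2
  exit 1
fi
echo "Token is available"
```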
<p>Once that's done, you can register a NuGet feed from inside your Cake script using the access token and then use it when publishing a package. Here's my working package publishing task inside my Cake script:</p>
<pre><code>Task("Publish")
.IsDependentOn("Pack")
.WithCriteria(() => isRunningOnBuildServer)
.Does(() =>
{
// Get the access token
var accessToken = EnvironmentVariable("SYSTEM_ACCESSTOKEN");
if (string.IsNullOrEmpty(accessToken))
{
throw new InvalidOperationException("Could not resolve SYSTEM_ACCESSTOKEN.");
}
// Add the authenticated feed source
NuGetAddSource(
"VSTS",
"https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json",
new NuGetSourcesSettings
{
UserName = "VSTS",
Password = accessToken
});
foreach (var nupkg in GetFiles(buildDir.Path.FullPath + "/*.nupkg"))
{
NuGetPush(nupkg, new NuGetPushSettings
{
Source = "VSTS",
ApiKey = "VSTS"
});
}
});
</code></pre>
<p>Note the use of "VSTS" for <code>UserName</code> and <code>ApiKey</code>. That's basically a dummy value - NuGet requires <em>something</em> for those properties but it doesn't really care what. The important part is that the <code>SYSTEM_ACCESSTOKEN</code> environment variable is being used as the <code>Password</code> for the <code>NuGetSourcesSettings</code>, and that the name of the new source matches the <code>Source</code> property in the <code>NuGetPushSettings</code>.</p>
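<p>For reference, the source registration that <code>NuGetAddSource</code> performs amounts to entries like these in a NuGet config file (a conceptual sketch using the post's placeholder URL; depending on platform, NuGet may store an encrypted <code>Password</code> instead of <code>ClearTextPassword</code>):</p>

```xml
<configuration>
  <packageSources>
    <add key="VSTS" value="https://pkgs.dev.azure.com/xyz/_packaging/xyz/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <VSTS>
      <add key="Username" value="VSTS" />
      <!-- The OAuth access token is used as the password. -->
      <add key="ClearTextPassword" value="...token value..." />
    </VSTS>
  </packageSourceCredentials>
</configuration>
```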
<p>Hopefully this post saves you a bit of time. Once it's set up it appears to work well, but discovering the "right way" of doing this took longer than it should have (if this even is the right way).</p>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
https://daveaglick.com/posts/the-bleeding-edge-of-razor
The Bleeding Edge Of Razor
2018-10-22T00:00:00Z
<p>Over the years there have been a number of projects designed to make using Razor templates from your own code easier. For a while, these third-party libraries were the only way to easily use Razor outside ASP.NET MVC because using the ASP.NET code directly was too complicated. That started to change with ASP.NET Core, and the ASP.NET team has slowly started to address this use case. In this post we'll take a look at the current bleeding edge of Razor and how you can use it today to enable template rendering in your own application.</p>
<p>Before we start looking at code, let's back up a step and consider what Razor is (and what it isn't). At its core, Razor is a templating language. Templating languages are designed to make producing output content easier by intermixing raw output with instructions on how to generate additional programmatically-based output. In this case, Razor is used to produce HTML documents. An important distinction that I want to make here is that Razor <em>is not</em> the set of HTML helpers and other support functionality that comes along with ASP.NET MVC. For example, helpers like <code>Html.Partial()</code> and page directives like <code>@section</code> aren't part of the Razor language. Instead they're shipped with ASP.NET MVC as additional support on top of Razor, which your Razor code can use.</p>
<p>This distinction wasn't always clear, but recently the ASP.NET team has been focusing on separating Razor <em>the language</em> from Razor <em>for ASP.NET MVC</em>. This is partly out of necessity as Razor has grown to support at least three different dialects (ASP.NET MVC, Razor Pages, and Blazor), but it also makes using Razor for your own purposes easier too.</p>
<h1 id="rendering-phases">Rendering Phases</h1>
<p>Turning Razor content from a string, file, or other source into final rendered HTML requires several phases:</p>
<ul>
<li>Generating C# code from the template</li>
<li>Compiling the C# code into an assembly</li>
<li>Loading the assembly into memory</li>
<li>Executing your compiled template</li>
</ul>
<p>I'll discuss each phase in more detail below. Before I do, note that Razor is under heavy development (and has been for a while). Even though a lot of the API is surfaced as public, it's been known to break in subtle ways between releases. On top of that, I learned most of this through trial-and-error and reverse engineering and make no assurances that this is the canonical way or even a correct way of doing any of this. You've been warned.</p>
<h2 id="generating-code">Generating Code</h2>
<p>A Razor template starts life as a string (or file) with intermixed HTML, C# code, and Razor directives. You can think of this template as a little program that takes input like your page model and outputs the resulting HTML. Like any program it needs to be compiled and executed. The first part of this process essentially "inverts" the HTML and C# code in the template and creates C# code that "prints" the HTML parts of your template along with the raw code that you added to your template.</p>
<p>This phase is where a lot of the recent work in Razor has been focused. It used to be that the process of converting a Razor template to C# code happened as part of the overall MVC Razor processing. Now, a series of libraries under <code>Microsoft.AspNetCore.Razor.Language</code> separates Razor <em>the language</em> from Razor <em>for ASP.NET MVC</em>.</p>
<p>Here's how to take a Razor template stored in the file <code>C:\Code\RazorExample\date.cshtml</code> and generate C# from it (you'll need to add the <code>Microsoft.AspNetCore.Razor.Language</code> package to get access to these classes):</p>
<pre><code>RazorConfiguration config = RazorConfiguration.Default;
RazorProjectFileSystem projectFileSystem =
RazorProjectFileSystem.Create(@"C:\Code\RazorExample");
RazorProjectEngine projectEngine =
RazorProjectEngine.Create(config, projectFileSystem);
RazorProjectItem projectItem = projectFileSystem.GetItem("date.cshtml");
RazorTemplateEngine templateEngine =
new RazorTemplateEngine(projectEngine.Engine, projectFileSystem);
RazorCodeDocument codeDocument = templateEngine.CreateCodeDocument(projectItem);
RazorCSharpDocument cSharpDocument = templateEngine.GenerateCode(codeDocument);
</code></pre>
<p>Given a <code>date.cshtml</code> file that looks like this:</p>
<pre><code><p>@DateTime.Now</p>
</code></pre>
<p>This will produce the following C# code in <code>cSharpDocument.GeneratedCode</code>:</p>
<pre><code>#pragma checksum "E:\Code\NewRazor\date.cshtml" "{ff1816ec-aa5e-4d10-87f7-6f4963833460}" "7dea33102781d0fc7059874abc785e31de14ef37"
// <auto-generated/>
#pragma warning disable 1591
[assembly: global::Microsoft.AspNetCore.Razor.Hosting.RazorCompiledItemAttribute(typeof(Razor.Template), @"default", @"/date.cshtml")]
namespace Razor
{
#line hidden
[global::Microsoft.AspNetCore.Razor.Hosting.RazorSourceChecksumAttribute(@"SHA1", @"7dea33102781d0fc7059874abc785e31de14ef37", @"/date.cshtml")]
public class Template
{
#pragma warning disable 1998
public async override global::System.Threading.Tasks.Task ExecuteAsync()
{
WriteLiteral("<p>");
#line 1 "E:\Code\NewRazor\date.cshtml"
Write(DateTime.Now);
#line default
#line hidden
WriteLiteral("</p>");
}
#pragma warning restore 1998
}
}
#pragma warning restore 1591
</code></pre>
<p>Let's break that down just a little bit...</p>
<p>The <code>RazorProjectFileSystem</code> is responsible for presenting available files and their content to the Razor engine. Its primary job is to create <code>RazorProjectItem</code> instances given a path. These <code>RazorProjectItem</code> objects contain metadata about the requested file as well as access to a <code>Stream</code> (if the file exists). The default <code>RazorProjectFileSystem</code> obtained by the call to <code>RazorProjectFileSystem.Create(string root)</code> is aptly named <code>DefaultRazorProjectFileSystem</code> and wraps <code>System.IO</code> classes like <code>FileInfo</code> and <code>FileStream</code>. If you want to access files differently (like from a database), you'll need to implement your own <code>RazorProjectFileSystem</code> and <code>RazorProjectItem</code>.</p>
<p>The <code>RazorProjectEngine</code> is the workhorse here. It slices up your template, applies a sequence of processing phases to it to construct a syntax tree, and then lowers that syntax tree into C#. If you need to adjust the way Razor generates your code, it'll probably be through the <code>RazorProjectEngine</code>. In future posts I'll probably take a look at some of these possibilities.</p>
<p>Like the <code>RazorProjectEngine</code>, the <code>RazorTemplateEngine</code> also participates in generating code. Its main job is essentially to add imports and other required functionality to your generated code and then defer to the <code>RazorProjectEngine</code> for processing of the syntax tree.</p>
<p>Finally, <code>RazorCodeDocument</code> contains the abstract representation of your template and <code>RazorCSharpDocument</code> contains the final produced C# code.</p>
<h2 id="compiling-the-code">Compiling The Code</h2>
<p>Now that we have some C# code, we need to compile it. We're done with the Razor language bits (at least for now) and we'll use Roslyn to compile our code:</p>
<pre><code>SourceText sourceText = SourceText.From(cSharpDocument.GeneratedCode, Encoding.UTF8);
SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(sourceText);
CSharpCompilationOptions compilationOptions =
new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary)
.WithSpecificDiagnosticOptions(
new Dictionary<string, ReportDiagnostic>
{
// Binding redirects
{ "CS1701", ReportDiagnostic.Suppress },
{ "CS1702", ReportDiagnostic.Suppress },
{ "CS1705", ReportDiagnostic.Suppress },
{ "CS8019", ReportDiagnostic.Suppress }
});
CSharpCompilation compilation =
CSharpCompilation.Create(
"RazorTest",
options: compilationOptions,
references: GetMetadataReferences())
.AddSyntaxTrees(syntaxTree);
</code></pre>
<p>In the first step we're loading the code in <code>cSharpDocument.GeneratedCode</code> into a Roslyn <code>SourceText</code> and then constructing a Roslyn <code>SyntaxTree</code> from it (which is different than a Razor syntax tree).</p>
<p>In the next statement, we're creating the options for our compilation. Specifically, we want to produce a library so we use <code>OutputKind.DynamicallyLinkedLibrary</code> and then turn off certain diagnostics that we know will be troublesome (you can adjust the list of suppressed diagnostics however you see fit).</p>
<p>In the last statement we prepare the code for compilation by using a Roslyn <code>CSharpCompilation</code>. This uses a factory <code>.Create()</code> method that takes a variety of arguments. In the code above, we're passing the name of the assembly ("RazorTest"), the options we created in the statement above, and a list of references we got by calling <code>GetMetadataReferences()</code> (more on that in just a second). The last call to our new <code>CSharpCompilation</code> object adds the syntax tree we constructed earlier.</p>
<p>As with any compiled code, the compiler needs to reference other libraries to find functionality. Some of these are in-the-box code libraries (like CoreFx) and others are your own assemblies that your Razor template uses. I separated this part into a <code>GetMetadataReferences()</code> method to keep the code clean:</p>
<pre><code>private static List<MetadataReference> GetMetadataReferences() =>
new List<MetadataReference>()
{
GetMetadataReference(typeof(InputTagHelper)),
GetMetadataReference(typeof(UrlResolutionTagHelper)),
GetMetadataReference(typeof(RazorCompiledItemAttribute)),
GetMetadataReference(typeof(IModelExpressionProvider)),
GetMetadataReference(typeof(IUrlHelper)),
GetMetadataReference(typeof(object)),
GetMetadataReference(typeof(DynamicAttribute)),
GetMetadataReference(
"System.Runtime, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"),
GetMetadataReference(
"netstandard, Version=2.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51")
};
private static MetadataReference GetMetadataReference(Type type) =>
MetadataReference.CreateFromFile(type.GetTypeInfo().Assembly.Location);
private static MetadataReference GetMetadataReference(string assemblyName) =>
MetadataReference.CreateFromFile(Assembly.Load(assemblyName).Location);
</code></pre>
<p>This code either loads assembly references by using a type that we know to be in the assembly or using the full name of the assembly (assuming the assembly binder can find it). This set of references should support a minimal Razor template compilation, but you may need to add or adjust it depending on your own template.</p>
<h2 id="loading-the-assembly">Loading The Assembly</h2>
<p>In the interest of full disclosure, the code in the previous section doesn't actually <em>compile</em> our template, it just sets up the Razor compiler. The actual compilation happens in this phase at the same time we emit our new template assembly to memory:</p>
<pre><code class="language-csharp">Assembly assembly;
EmitOptions emitOptions =
    new EmitOptions(debugInformationFormat: DebugInformationFormat.PortablePdb);
using (MemoryStream assemblyStream = new MemoryStream())
{
    using (MemoryStream pdbStream = new MemoryStream())
    {
        EmitResult result = compilation.Emit(
            assemblyStream,
            pdbStream,
            options: emitOptions);
        if (!result.Success)
        {
            List<Diagnostic> errorsDiagnostics = result.Diagnostics
                .Where(d => d.IsWarningAsError || d.Severity == DiagnosticSeverity.Error)
                .ToList();
            foreach (Diagnostic diagnostic in errorsDiagnostics)
            {
                FileLinePositionSpan lineSpan =
                    diagnostic.Location.SourceTree.GetMappedLineSpan(
                        diagnostic.Location.SourceSpan);
                string errorMessage = diagnostic.GetMessage();
                string formattedMessage =
                    "("
                    + lineSpan.StartLinePosition.Line.ToString()
                    + ":"
                    + lineSpan.StartLinePosition.Character.ToString()
                    + ") "
                    + errorMessage;
                Console.WriteLine(formattedMessage);
            }
            return;
        }
        assemblyStream.Seek(0, SeekOrigin.Begin);
        pdbStream.Seek(0, SeekOrigin.Begin);
        assembly = Assembly.Load(assemblyStream.ToArray(), pdbStream.ToArray());
    }
}
</code></pre>
<p>Most of the work here happens in the <code>compilation.Emit()</code> method. We pass it an options object that tells it we want to produce a portable PDB (which will get embedded in the in-memory assembly and can be used for debugging the template). This method compiles and serializes the assembly to a stream.</p>
<p>The bulk of the code here deals with error reporting. Once the compilation and emit are done, the <code>EmitResult</code> object will contain a <code>Success</code> property that tells you if the compilation was successful. If it wasn't, you can get compilation errors by examining the <code>EmitResult.Diagnostics</code> property. The rest of the code above just formats a nice message using Roslyn line span information (normally, I'd create <code>formattedMessage</code> using string interpolation, but I used string concatenation instead to make it clearer what's going on for this post).</p>
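<p>For reference, the interpolated version of that message collapses to a single expression (<code>Format</code> here is just a hypothetical helper to isolate the formatting, not part of the code above):</p>
<pre><code class="language-csharp">// Interpolated equivalent of the string concatenation in the listing above
static string Format(int line, int character, string errorMessage) =>
    $"({line}:{character}) {errorMessage}";
</code></pre>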
<p>Finally, we reset the assembly and PDB streams to the start (now that Roslyn has written to them) and pass them to <code>Assembly.Load()</code> to construct an in-memory assembly we can use in the next phase.</p>
<h2 id="executing-the-template">Executing The Template</h2>
<p>At this point we have an assembly that contains the compiled version of our template. All we have to do now is run it:</p>
<pre><code class="language-csharp">RazorCompiledItemLoader loader = new RazorCompiledItemLoader();
RazorCompiledItem item = loader.LoadItems(assembly).SingleOrDefault();
RazorPage<dynamic> page = (RazorPage<dynamic>)Activator.CreateInstance(item.Type);
TextWriter writer = new StringWriter();
page.ViewContext = new ViewContext()
{
    Writer = writer
};
page.HtmlEncoder = HtmlEncoder.Default;
page.ExecuteAsync().GetAwaiter().GetResult();
Console.WriteLine(writer.ToString());
</code></pre>
<p>The <code>RazorCompiledItemLoader</code> knows how to use reflection to find the class that represents your template in the assembly. Information about that class gets returned as a <code>RazorCompiledItem</code> which, among other things, contains the type of your template class.</p>
<p>We can create an instance of the class using <code>Activator</code> (though you can certainly use expression trees or some other mechanism to instantiate it via reflection). By default, Razor templates inherit from <code>RazorPage<TModel></code> and the default model is <code>dynamic</code> so the instance we end up with is a <code>RazorPage<dynamic></code> (also why we needed to make sure we loaded the assembly that contains <code>DynamicAttribute</code> when gathering <code>MetadataReference</code> objects, because that assembly is responsible for <code>dynamic</code> support).</p>
<p>When a <code>RazorPage</code> is executed, it requires a few things like a <code>ViewContext</code> and an <code>HtmlEncoder</code>. The code above creates a minimal <code>ViewContext</code>, and you'll need to populate it further if your template uses other view features like the <code>ViewBag</code>. Then we call <code>RazorPage.ExecuteAsync()</code> to execute the template and get rendered HTML (I call it synchronously above, but presumably you'd be calling it in an <code>async</code> method and would <code>await</code> the call).</p>
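<p>For completeness, inside an <code>async</code> method the last two lines of the listing above would simply become:</p>
<pre><code class="language-csharp">await page.ExecuteAsync();
Console.WriteLine(writer.ToString());
</code></pre>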
<h1 id="bringing-it-all-together">Bringing It All Together</h1>
<p>Now that we've walked through how to do this on your own, it's time to mention that there are already libraries that do this for you using the new ASP.NET Core Razor engine. Two of my favorites are <a href="https://github.com/mholo65/gazorator">Gazorator</a> (by my friend <a href="https://twitter.com/mholo65">Martin Björkström</a>, without whom this post probably never would have happened) and <a href="https://github.com/toddams/RazorLight">RazorLight</a>. If you want to customize the process or have full control over the phases, the code above should get you started. However, if you just want to turn a Razor template into HTML I'd consider using one of these libraries to abstract all these details from your code.</p>
<h1 id="but-what-about-mvc">But What About MVC?</h1>
<p>If you start adding MVC conventions to your templates, you'll notice they either result in failures or just plain don't work. For example, if you add a layout to your template:</p>
<pre><code>@{
    Layout = "_MyLayout.cshtml";
}
</code></pre>
<p>The layout simply won't be rendered. That's because the Razor language bits discussed above are a little bit leaky with regards to MVC. For example, the default <code>RazorPage</code> does have a <code>Layout</code> property so setting it in your template won't cause the compilation to fail. However, the out-of-the-box Razor language engine we use above doesn't know anything about layouts or how to render them. I'm planning on following up this post in the near future with an even deeper dive into the Razor engine where I'll discuss how to light up the MVC version of Razor you know and love and the extensibility mechanisms that are used to do so.</p>
https://daveaglick.com/posts/msbuild-loggers-and-logging-events
MSBuild Loggers And Logging Events
2018-10-04T00:00:00Z
<p>I recently learned all about how MSBuild logging works and was surprised at how powerful it is. I was also disappointed by how little information there is on the topic (though <a href="https://docs.microsoft.com/en-us/visualstudio/msbuild/logging-in-msbuild">the docs</a> are quite good). In this post I'll discuss what MSBuild logging is and how you can write your own cross-platform logger that can be plugged into any build process.</p>
<h1 id="logging-events">Logging Events</h1>
<p>When MSBuild executes it emits a sequence of events that describe the current phase and provide a whole bunch of information about it. This includes things like starting a task or target, raising a message, and warning and error output. <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ieventsource#events">The current set of individual events</a> is:</p>
<ul>
<li><code>BuildFinished</code></li>
<li><code>BuildStarted</code></li>
<li><code>CustomEventRaised</code></li>
<li><code>ErrorRaised</code></li>
<li><code>MessageRaised</code></li>
<li><code>ProjectFinished</code></li>
<li><code>ProjectStarted</code></li>
<li><code>StatusEventRaised</code></li>
<li><code>TargetFinished</code></li>
<li><code>TargetStarted</code></li>
<li><code>TaskFinished</code></li>
<li><code>TaskStarted</code></li>
<li><code>WarningRaised</code></li>
</ul>
<p>Don't let that relatively sparse set of events lead you to think there isn't much data to be had. Each one of these events is raised with its own arguments, which can get quite large. For example, the <code>TargetStarted</code> event passes a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.targetstartedeventargs"><code>TargetStartedEventArgs</code></a> class that includes:</p>
<ul>
<li><code>BuildEventContext</code> with lots of data about the target location</li>
<li><code>Message</code></li>
<li><code>ParentTarget</code></li>
<li><code>ProjectFile</code></li>
<li><code>TargetFile</code></li>
<li><code>TargetName</code></li>
</ul>
<p>Writing a logger is all about responding to these events in different ways. In fact, the console output that you're used to seeing from MSBuild is actually generated from a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.logging.consolelogger">normal logger</a> that converts these logging events into meaningful strings.</p>
<h1 id="writing-a-logger">Writing A Logger</h1>
<p><a href="https://docs.microsoft.com/en-us/visualstudio/msbuild/build-loggers">To create your own logger</a> you can either implement the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ilogger"><code>ILogger</code></a> interface or derive from <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.utilities.logger"><code>Logger</code></a> (I recommend the latter).</p>
<p>Your logger will need to register for the events it wants to handle. This is done in the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.utilities.logger.initialize"><code>Initialize</code></a> method which gives your logger an <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ieventsource"><code>IEventSource</code></a> instance. This event source contains the events that you should register handlers for (the same ones listed above, including a meta-event named <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ieventsource.anyeventraised"><code>AnyEventRaised</code></a> that calls your handler for all events).</p>
<p>For example, here's a simple logger that writes the start and end of each target out to the console:</p>
<pre><code class="language-csharp">using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class TargetLogger : Logger
{
    public override void Initialize(IEventSource eventSource)
    {
        eventSource.TargetStarted +=
            (sender, evt) => Console.WriteLine($"{evt.TargetName} started");
        eventSource.TargetFinished +=
            (sender, evt) => Console.WriteLine($"{evt.TargetName} finished");
    }
}
</code></pre>
<h1 id="adding-your-logger-to-a-build">Adding Your Logger To A Build</h1>
<p>Once you've written your logger you need to compile it to an assembly and tell MSBuild to use it with the <code>/logger</code> switch from the <a href="https://docs.microsoft.com/en-us/visualstudio/msbuild/msbuild-command-line-reference">MSBuild command-line interface</a>:</p>
<pre><code class="language-cmd">msbuild /logger:TargetLogger,C:\Loggers\TargetLogger.dll ...
</code></pre>
<h1 id="passing-parameters">Passing Parameters</h1>
<p>One thing that's kind of neat about the MSBuild logging API is that you can pass whatever parameters you want from the command-line through to your logger. These are exposed as a <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ilogger.parameters"><code>Parameters</code></a> property in the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.framework.ilogger"><code>ILogger</code></a> interface. That property will contain whatever string gets used on the command-line after a <code>;</code> when specifying the logger:</p>
<pre><code class="language-cmd">msbuild /logger:TargetLogger,C:\Loggers\TargetLogger.dll;MyParameters,Foo,Bar ...
</code></pre>
<p>Note that it's up to you to parse the parameters string in whatever way is appropriate from your <code>Initialize</code> method.</p>
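<p>For example, given the command-line above, <code>Parameters</code> would contain the string <code>MyParameters,Foo,Bar</code>. A minimal parsing sketch might look like this (the comma convention and the <code>ParseParameters</code> helper are my own choices here, not anything mandated by MSBuild):</p>
<pre><code class="language-csharp">// Splits a raw MSBuild logger parameter string like "MyParameters,Foo,Bar"
// into its individual values (MSBuild imposes no format, so the delimiter is up to you)
public static string[] ParseParameters(string parameters) =>
    string.IsNullOrEmpty(parameters)
        ? Array.Empty<string>()
        : parameters.Split(',');
</code></pre>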
<h1 id="writing-a-cross-platform-logger">Writing A Cross-Platform Logger</h1>
<p>A challenge that I ran into was how to write a logger that could be used for both the Visual Studio version of MSBuild and the one that ships with the .NET Core SDK. These are essentially the same MSBuild, but each one targets a different runtime. The Visual Studio version of MSBuild targets .NET Framework 4.6 while the .NET Core SDK version of MSBuild targets either .NET Standard 2.0 or a close version of .NET Core (this seems to change with each SDK release). So the question is: what should your own logger target?</p>
<p>If you target <code>net46</code> and try to use your logger from the .NET Core SDK you'll get a runtime error. Likewise, if you target something like <code>netstandard2.0</code> you'll get a runtime error from the Visual Studio MSBuild. It turns out there is <em>one</em> target that both versions of MSBuild have in common: <code>netstandard1.3</code>. If you target your logger to <code>netstandard1.3</code> you'll be able to use a single assembly for either MSBuild. However, if you need your logger to use APIs that aren't in .NET Standard 1.3 then you'll need to multi-target your logger and use whichever assembly is appropriate for the version of MSBuild you're using it with.</p>
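<p>In project file terms that's just a matter of picking the target framework (the project content below is a minimal sketch, not taken from a real logger project):</p>
<pre><code class="language-xml"><Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- The one target both the Visual Studio and .NET Core SDK versions of MSBuild can load -->
    <TargetFramework>netstandard1.3</TargetFramework>
    <!-- Or, if you need APIs beyond .NET Standard 1.3, multi-target instead:
    <TargetFrameworks>net46;netstandard2.0</TargetFrameworks>
    -->
  </PropertyGroup>
</Project>
</code></pre>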
<h1 id="multi-processor-logging">Multi-Processor Logging</h1>
<p>So far I've just discussed logging a nice linear sequence of events. That all <a href="https://docs.microsoft.com/en-us/visualstudio/msbuild/logging-in-a-multi-processor-environment">goes out the window</a> when performing multi-processor builds. I'm not going to dive into that, at least not in this post, but it's worth keeping in mind.</p>
<h1 id="logging-out-of-process">Logging Out Of Process</h1>
<p>The last thing I want to talk about is the potential for responding to MSBuild logging events from another process, either on the same system or even over a network. MSBuild doesn't have a built-in capability for this, so I wrote a library called <a href="https://msbuildpipelogger.netlify.com/">MsBuildPipeLogger</a> that can do this over an anonymous or named pipe. It abstracts the pipe mechanics from you, so you just need to create an instance of a server class and then add the <code>MsBuildPipeLogger.Logger</code> to MSBuild. The MsBuildPipeLogger server then allows your application to receive MSBuild logging events as the build proceeds. The MsBuildPipeLogger server also implements <code>IEventSource</code> so that you can connect a normal MSBuild logger to it as if it were running in-process directly from MSBuild.</p>
https://daveaglick.com/posts/announcing-discoverdotnet
Announcing Discover .NET
2018-05-22T00:00:00Z
<p>After what seems like an eternity in development, I am thrilled to announce the launch of <a href="https://discoverdot.net/">Discover .NET</a>. The site is an attempt to improve discoverability in the .NET ecosystem by collecting information on topics like projects, issues, blogs, groups, events, and resources.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Discoverability is definitely part of the equation. How can we expose other devs who would get value, especially those who aren’t on the social sites, to cool projects like yours? Still lots of room for improvement in that area.</p>— Dave Glick (@daveaglick) <a href="https://twitter.com/daveaglick/status/950883853715025920?ref_src=twsrc%5Etfw">January 10, 2018</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>I built this site for a few reasons, some community focused and others related to my own interests like static sites:</p>
<ul>
<li>Make it easier to learn about .NET stuff you may not have known about.</li>
<li>Collect a comprehensive database of information on all things .NET.</li>
<li>Demonstrate to myself and others what can be accomplished with data-centric static sites.</li>
<li>Provide an example of how <a href="https://wyam.io/">Wyam</a> can be used to power highly customized static sites.</li>
</ul>
<p>I’ll talk more about the technical nature of the site and those last two goals in a follow-up post, but for now I’d like to focus on the community aspects of the site. If you’d like to skip the details on different areas of the site but want to know how to help, <a href="https://daveaglick.com/#call-to-action">skip ahead to the call to action</a>.</p>
<p>And a quick note: please don't take missing projects, blogs, events, etc. as even remotely personal. I've been slowly adding items for months and at some point I realized I would have to just ship the thing or it would never get out the door. I'll continue to add items, but now I also <a href="https://discoverdot.net/suggest/">need your help</a> to make sure we catalog everything out there.</p>
<h1 id="daily-discovery">Daily Discovery</h1>
<p>This is where the idea for Discover .NET started and it grew from there (scope creep is a Real Thing That Happens). The daily discovery is a curated link to a project, blog, or other resource that you may not have seen. While some of the discoveries will be well known within the community, an emphasis will be placed on lesser-known resources. If you want to stay updated on discoveries, a feed is available.</p>
<h1 id="projects-and-issues">Projects and Issues</h1>
<p>It became clear that gathering and presenting project information for the daily discovery could be extended to a sort of database across all .NET projects. One of the neat things about the site is that it integrates with GitHub and NuGet so that minimal information needs to be provided about a project to properly index it.</p>
<p>To make the project database more useful, a variety of sorts and filters were added including distinguishing between <a href="https://discoverdot.net/projects/?filter-microsoft">Microsoft-sponsored projects</a>, <a href="https://discoverdot.net/projects/?filter-netplatform">.NET platform projects</a> (projects that are considered “part of the platform”), and <a href="https://discoverdot.net/projects/?filter-netfoundation">projects in the .NET Foundation</a>.</p>
<p>One of the more novel things about the site is how it deals with project issues. <em>Every</em> open issue from every project is aggregated and presented on the site. Since Discover .NET is first and foremost designed to enhance community discoverability and participation, one of the goals of aggregating all the issues is to emphasize <a href="https://discoverdot.net/issues/?tab=helpwanted">help wanted issues</a> and <a href="https://discoverdot.net/issues">recent issues</a>. <a href="https://github.com/up-for-grabs/up-for-grabs.net/issues/323">I’ve had an interest in doing something like this for years</a> and am particularly proud of how well it turned out.</p>
<h1 id="blogs-and-posts">Blogs and Posts</h1>
<p>So much good information is communicated through blogs, but there are only a handful of ways to become exposed to blogs you may not otherwise have known to visit or keep up with. Curated post lists like <a href="https://www.alvinashcraft.com/">Dew Drop</a> and <a href="http://blog.cwa.me.uk/">The Morning Brew</a> are a great way to keep up, as are platforms like <a href="https://www.reddit.com/r/csharp/">Reddit</a>. However, all of these aim to distill blog posts to the most relevant and as far as I know there’s no good comprehensive collection of blogs and posts across the .NET community.</p>
<p>Discover .NET collects <em>every</em> <a href="https://discoverdot.net/blogs">blog and all their posts</a>. This information is made available as <a href="https://discoverdot.net/#recent-news">a list of recent posts from all blogs</a>, <a href="https://discoverdot.net/feeds">feeds you can subscribe to</a>, and <a href="https://discoverdot.net/search">searching capabilities</a>.</p>
<h1 id="broadcasts-and-episodes">Broadcasts and Episodes</h1>
<p>Podcasts and other types of broadcasts like YouTube tutorials and live coding screencasts are becoming more popular. In addition to blogs, Discover .NET also collects <a href="https://discoverdot.net/broadcasts">broadcasts and their episodes</a>.</p>
<h1 id="recent-news">Recent News</h1>
<p>To help keep you up to date on everything going on in .NET, recent posts and episodes from all blogs and broadcasts are presented on the homepage as well as available <a href="https://discoverdot.net/feeds">via feed</a>.</p>
<h1 id="groups-and-events">Groups and Events</h1>
<p>All of this online community is great, but this wouldn’t be a comprehensive resource without also including the real-world parts of the community. The Meetup API is used to pull all .NET-related groups (using the “.NET” topic), which are then combined with data on other non-Meetup-based groups for a full picture of everything going on. <a href="https://discoverdot.net/groups">Groups are presented on a map and can be sorted and filtered by name or location</a>.</p>
<p>Likewise, the next event from Meetup groups as well as conferences and other types of events <a href="https://discoverdot.net/events">are presented in a similar way</a>.</p>
<h1 id="resources">Resources</h1>
<p>All of this data is great, but not everything valuable to the community fits into one of these clean categories. <a href="https://discoverdot.net/resources">The resources section</a> includes other links like commercial products, web sites, and anything else that the community might find valuable.</p>
<h1 id="search">Search</h1>
<p>I’m particularly fond of the <a href="https://discoverdot.net/search">search feature</a> of the site. It lets you locate content across all the different data types. For example, <a href="https://discoverdot.net/search?query=Blazor">searching for “Blazor”</a> yields some interesting issues, blog posts, and podcasts. More on how this works in a following post.</p>
<h1 id="feeds-and-api">Feeds and API</h1>
<p>One hope I have for the site is that it extends beyond your browser. I’d love for you to be able to get the information you need when and where you want it. To this end, <a href="https://discoverdot.net/feeds">several RSS and Atom feeds</a> are available. There’s also an <a href="https://discoverdot.net/api">API</a> and I’d love for the community to use it and build interesting tools using all this data.</p>
<h1 id="looking-ahead">Looking Ahead</h1>
<p>This is an ongoing project and I have <a href="https://github.com/daveaglick/discoverdotnet/issues">lots of ideas</a> for future improvements. A couple items I’d like to add soon are <a href="https://github.com/daveaglick/discoverdotnet/issues/15">support for multiple NuGet packages per-project</a>, <a href="https://github.com/daveaglick/discoverdotnet/issues/23">support for Chocolatey packages</a>, and a <a href="https://github.com/daveaglick/discoverdotnet/issues/28">Twitter bot</a> that automatically posts content from the site. I’d love to hear what you think should be added, so <a href="https://github.com/daveaglick/discoverdotnet/issues/new">file an issue</a> if you’ve got any ideas.</p>
<h1 id="call-to-action">Call To Action</h1>
<p>I need your help! This is a site for and hopefully by the community. Gathering the initial data was really hard. Even though it’s easy to add any particular resource to the site, collecting hundreds of items took a lot of time. Now that the site is live, I’m hoping the community can help scale data collection. <a href="https://discoverdot.net/suggest/">Go here for instructions on how to suggest new content</a>. If you’re interested in taking an even more active role, <a href="https://twitter.com/daveaglick">drop me a line</a>.</p>
https://daveaglick.com/posts/blazor-razor-webassembly-and-mono
Blazor, Razor, WebAssembly, and Mono
2018-04-24T00:00:00Z
<p><a href="https://github.com/aspnet/Blazor">Blazor</a> is an exciting new web framework from the ASP.NET team that uses <a href="https://github.com/aspnet/Razor">Razor</a>, <a href="http://webassembly.org/">WebAssembly</a>, and Mono to enable the use of .NET on the client. There’s been a lot of excitement about the possibilities this presents, but there’s also been just as much confusion about how these various parts fit together. In this post I’ll attempt to clarify things and show you exactly what each of these technologies do and how they work together to enable .NET in your browser.</p>
<h1 id="how-javascript-works">How JavaScript Works</h1>
<p>Before we start examining some of the more recent pieces of this puzzle, it’ll help to take a step back and look at what happens inside your browser when it loads and evaluates JavaScript code:</p>
<img src="/posts/images/js.png" class="img-fluid" style="margin-top: 6px; margin-bottom: 6px;">
<p>Inside every browser is a <em>JavaScript runtime</em> (or <em>engine</em>) that's responsible for turning your JavaScript into something that can be evaluated. It's often referred to as a <em>virtual machine</em> since it presents a well-defined boundary in which the code is evaluated and isolates that evaluation to a specific sandboxed environment. This diagram is a gross oversimplification of modern JavaScript engines, but they all generally consist of three stages:</p>
<ul>
<li><strong>Parser</strong> - Performs <a href="https://en.wikipedia.org/wiki/Lexical_analysis">lexical analysis</a> on the JavaScript code and converts it into tokens (small strings with specific meaning). The tokens are then reassembled into a syntax tree that gets used in the next step.</li>
<li><strong>Compiler</strong> - Transforms the syntax tree into bytecode, which is a low-level representation of the code that the interpreter can quickly understand and evaluate.</li>
<li><strong>JIT</strong> - A just-in-time interpreter that takes the bytecode and evaluates it on the fly at runtime, thus executing your code.</li>
</ul>
<p>I'm sure I've misrepresented or totally missed certain subtleties of this process, so if you see anything glaringly wrong please sound off in the comments. The important point here is that the JavaScript engine that exists in every browser takes your JavaScript code, figures out what it means, and then evaluates it inside the browser.</p>
<h1 id="how-webassembly-works">How WebAssembly Works</h1>
<p>WebAssembly is described by the official site as:</p>
<blockquote>
<p>“WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine”</p>
</blockquote>
<p>That’s not particularly helpful since it’s intentionally abstract to allow for future implementation changes. What’s important for our purposes is to understand how WebAssembly interacts with the existing JavaScript support that’s already in your browser. Here’s that chart again with the addition of WebAssembly bits:</p>
<img src="/posts/images/webassembly.png" class="img-fluid" style="margin-top: 6px; margin-bottom: 6px;">
<p>The thing to notice here is that the WebAssembly code is fed directly into the JIT compiler of the JavaScript runtime. That's because <a href="http://webassembly.org/docs/modules/">WebAssembly modules</a> have already been compiled into a form of JavaScript bytecode that modern WebAssembly-supporting JavaScript engines can evaluate in their JIT component. The takeaway here is that WebAssembly is <em>related</em> to JavaScript as it pertains to runtime evaluation, but isn't itself JavaScript. This is a common misconception. WebAssembly is not a transpiler like TypeScript, CoffeeScript, etc.</p>
<h1 id="mono">Mono</h1>
<p>Recall that I mentioned Mono at the beginning of this post. It’s arguably the most important part of the .NET-in-the-browser story but it’s probably also the least understood.</p>
<p>In order to evaluate .NET assemblies in a web browser, we need something that's been compiled for WebAssembly that knows what to do with .NET assemblies and IL. In other words, we need a .NET runtime that's been compiled to WebAssembly. When Blazor was first starting out, Steve Sanderson found that he could compile a small, portable, open source .NET runtime called <a href="https://github.com/chrisdunelm/DotNetAnywhere">DotNetAnywhere</a> to WebAssembly without too much trouble:</p>
<blockquote>
<p>Blazor runs .NET code in the browser via a small, portable .NET runtime called DotNetAnywhere (DNA) compiled to WebAssembly</p>
</blockquote>
<p>Unfortunately <a href="http://blog.stevensanderson.com/2017/11/05/blazor-on-mono/">that didn't scale very well</a>. Thankfully for us, Microsoft already owns an open source, cross-platform, highly-portable .NET runtime. No, not .NET Core. I'm talking about the <em>other</em> open source cross-platform .NET runtime: Mono. Even better, <a href="http://www.mono-project.com/news/2017/08/09/hello-webassembly/">the Mono team had recently announced</a> they were working on getting Mono to compile to WebAssembly.</p>
<p>While the Mono team continues to address bugs and corner cases, the runtime already works very well on WebAssembly. One important point is that this still has nothing to do with Blazor (other than maybe some incentive). The Mono WebAssembly runtime is totally independent of Blazor and can be used by anyone to evaluate .NET assemblies in the browser. In fact, other projects like <a href="https://github.com/praeclarum/Ooui">Ooui</a> have already started to leverage it.</p>
<p>It's also important to note that this is a full .NET runtime that evaluates .NET assemblies. Unlike the WebAssembly support that compiled languages like C++ and Rust are exploring where the application itself is compiled to WebAssembly, the Mono bits are the only thing that needs to be compiled to WebAssembly. Your own .NET assembly will "just work" when it's loaded and interpreted by the Mono runtime. All that said, the Mono team is also exploring a precompilation scenario for enhanced performance. In that mode, you would essentially compile your .NET code along with the Mono runtime directly into WebAssembly bytecode.</p>
<h1 id="blazor">Blazor</h1>
<p>All of this sets up the exciting work going on in Blazor itself. Blazor is the name of a project that includes both a runtime component and various tooling. The tooling helps produce the assemblies that the runtime bits know how to work with. What gets delivered to your browser looks like this:</p>
<img src="/posts/images/blazor.png" class="img-fluid" style="margin-top: 6px; margin-bottom: 6px;">
<p>There's a lot going on here so let's examine each part:</p>
<ul>
<li><strong>Blazor Page</strong> - The HTML file that Blazor produces is really simple. It basically just includes CSS files and headers as well as a couple JavaScript files to help bootstrap the WebAssembly support (WebAssembly modules currently have to be loaded by JavaScript).</li>
<li><strong>blazor.js/mono.js</strong> - These JavaScript files are responsible for loading the Mono WebAssembly module and then giving it your Blazor application assembly. They also contain support for features like JavaScript interop.</li>
<li><strong>mono.wasm</strong> - This is the actual Mono WebAssembly .NET runtime that <code>mono.js</code> loads into the browser. It is basically Mono compiled to WebAssembly.</li>
<li><strong>mscorlib.dll, etc.</strong> - The core .NET assemblies. These need to be loaded just as they would be for any other .NET runtime; otherwise you'll be missing key parts of the .NET <code>System</code> namespace(s).</li>
<li><strong>myapp.dll</strong> - Your Blazor application which was processed by the Razor engine and then compiled by the Blazor tooling. Today the tooling exists as MSBuild tasks that get added to your project by the Blazor NuGet package.</li>
</ul>
<p>The end result is Razor and C# in your browser! To learn more about Blazor from a developer perspective, check out <a href="https://learn-blazor.com/">https://learn-blazor.com/</a>.</p>