Git TFS Visual Studio Integration

As Git repositories are becoming more popular, I am writing this post to review the two main types of repositories (Version Control Systems), what Git is, and how they are supported by Team Foundation Server and Visual Studio.

The two types of Version Control Systems (VCS) are:

1. Centralized Version Control System (CVCS)

Centralized Version Control Systems follow a client/server architecture. The repository lives on the server, a central location to which all collaborators connect to perform all or most of their repository operations. The code history consequently resides on the central server.

This centralized architecture of Version Control is implemented in Team Foundation Server by Team Foundation Version Control (TFVC). It is how TFS has been implemented since its first release, Team Foundation Server 2005.

1.1. TFVC Server Workspace: Check-in / Check-out Model

All developers are familiar with traditional Version Control Systems such as CVS, SVN, Visual SourceSafe and Team Foundation Server (TFS), to name some of the most popular. They are all Centralized Version Control Systems (CVCS) and have in common that we must ask the central system (the repository) for permission to edit a file.
When we get the code base from the central repository, all files are read-only. We must then ask the server for authorization to edit a file by executing a check-out operation. If authorization is granted by the central repository, the read-only flag on the file is removed. We edit the file and, when we want to integrate it with the code base on the server, we do a check-in operation. If nobody else changed the file, it is saved in the repository right away; otherwise we need to resolve conflicts before saving. This is what is called the “Check-in / Check-out” model.
The check-in / check-out model is thus not designed to work offline, as we need access to the repository server to be able to work.
This is the model that has been around since the first version of TFS (2005) and also in its predecessor, Visual SourceSafe.

Note on working offline with a server workspace:
As server workspaces always need to be connected to TFS, opening a Visual Studio solution without having access to TFS will prompt the user to go into offline mode, which then allows us to edit files. It is not a seamless experience, however, as we need to perform some extra manual steps: choose to “go offline” -> overwrite file -> go online -> reconcile -> check-in. As we can see, the Check-in / Check-out pattern is broken for working offline, so it is not an ideal situation. If we need to work offline, it would be better to have a different pattern that works the same way whether we are online or not. This is what the Edit / Commit model tries to solve for CVCS. DVCS solves it even better thanks to its architecture.

1.2. TFVC Local Workspace: Edit / Commit Model

Up until TFS 2012, the check-in / check-out model was the only model supported. There was only one type of workspace, the server workspace, which was simply called “workspace” as no other type of workspace existed yet. If the old documentation (for TFS 2005, 2008, 2010) sometimes talks about a “local workspace”, it only means the folder on your local drive; the workspace itself is actually a server workspace when translated into the new nomenclature.
Since TFS 2012, it is possible to use local workspaces, which allow working with TFS temporarily offline while offering a better user experience than server workspaces. Note that, due to their client/server architecture, CVCS are not designed for working offline; local workspaces will never solve all problems or offer a full offline experience, but they nevertheless provide a more seamless one.

When we get the code base from the central repository, all files are writeable (no read-only file attribute), which means that it is possible to edit a file without asking the server for permission. While offline, we can also undo uncommitted changes locally and compare the current version of a file with the last version we had before starting to modify it.
Nevertheless, commits are executed against the central repository (TFS). So, while local workspaces allow us to work disconnected from the server by letting us edit files, it is still a Centralized VCS and we must commit to the server. As a result, file change history is still stored centrally on the server. Consequently, many operations cannot be executed offline (creating branches, looking at file history, etc.) and local workspaces therefore offer limited offline features beyond being able to edit files while offline.
The list of supported local operations is detailed in the following blog post: Server workspaces vs. local workspaces

From a performance point of view, a disadvantage of local workspaces is that when committing changes to the central repository, the tool has to scan the entire local copy of the code to see which files have changed. This means that local workspaces do not scale as well as server workspaces for very large code bases. By that, Microsoft means workspaces containing more than 100,000 items, which is quite an uncommon scenario.
To use local workspaces in Visual Studio, we just need to edit the workspace, go to the advanced section and change the Location property to Local. See instructions on MSDN: Create and work with workspaces

Local Workspace vs Server Workspace – best practice:
While local workspaces have limitations when working offline (especially compared to a DVCS), they still give advantages over server workspaces. Microsoft actually recommends using local workspaces instead of server workspaces. See the MSDN article Decide between using a local or a server workspace

2. Distributed Version Control System (DVCS)

Distributed Version Control Systems follow a distributed architecture where there is no technical difference between any node running the Version Control System. The complete repository is distributed among the users, by which we mean that every user has a complete copy of a fully functional repository. There is no concept of a client having to connect to a server to commit code. This fully functional repository is named the local repository as it is local to every team member.
An advantage of distributed repositories is that we can do many more things offline than with TFS’ local workspaces: we can create branches, compare versions of files, commit changes and so on while being offline. It is worth repeating that every developer has a full Version Control System working on their machine, as it is quite a departure from preconceived ideas and old habits that a Version Control System should have a central store.
Another advantage of DVCS is performance: as everything is local, no operation has network overhead and no central system must share its resources among various users. Most operations will seem instantaneous.
Having a full copy of the repository also means that when we get code from a repository, we get not only all the files but also the entire history. This can be slow the first time if the project is already large and has a lot of history, but only the first time: afterwards, only changes are exchanged between repositories.

2.1. Collaboration with DVCS

As everybody commits code locally, integrating everyone’s code is done in a separate operation. In contrast, with a CVCS such as TFVC, integration happens as part of the check-in operation (the merge operation).
Due to the distributed nature of DVCS (comparable to peer-to-peer technology), programmers could theoretically collaborate by exchanging sets of changes directly with one another. This would nevertheless prove cumbersome in practice, as we would have to understand what other team members are working on, and everybody would have to keep their computer online all the time so that we could get changes from them at any time. This last point would defeat one of the purposes of a DVCS, which is being able to work offline.
Therefore, in practice, a repository node is set up and dedicated to code integration so that collaboration is eased. That intermediary is called the remote repository (as opposed to everyone’s local repository). All collaborators use the remote repository to push their sets of changes to and to pull other people’s sets of changes from. As the remote repository runs on a server that is supposed to be up all the time, collaboration is always possible no matter which collaborators are offline.

3. CVCS vs DVCS, how to choose?

A CVCS best suits non-distributed teams, meaning team members are (mostly) always online and have access to the CVCS server (mostly) all the time with a reliable and fast network connection. A typical example is a team within an office having TFS running in the corporate network.
With a CVCS, we need to be online to be able to work (so that we can check out files). Local workspaces ease this limitation but bring others, as they do not offer a full offline experience. Local workspaces still use TFVC (Team Foundation Version Control), which is a CVCS and will never be able to offer a full offline experience due to its architecture.
In short: with CVCS, team members develop online.

A DVCS best suits highly distributed teams, meaning that team members may be geographically distributed, offline for some time, or without a fast or reliable internet connection. A DVCS permits developing offline with a fully featured Version Control System, so we don’t have to be online to work; we only need to be online when we want to integrate our code with the code of other team members.
In short: with DVCS, team members develop offline and integrate online.

4. What is Git?

Git is a Free and Open Source Distributed Version Control System.
Git is probably the leading DVCS tool and has great support amongst the open source community and across platforms (Linux, Mac, Windows). It has also been supported in Visual Studio since Visual Studio 2012 Update 3.

Apart from being distributed, the main difference between Git and a standard version control system (SVN, TFVC, etc.) is that versions are not stored as sets of file-based changes (deltas); instead, each version is a snapshot of a mini filesystem. See the Git Basics documentation for more information about Git’s storage mechanism.
If you want to learn more about Git, you can start at the Git documentation.

5. Git and Visual Studio

  • Visual Studio 2010 and under: Git is not officially supported by Microsoft.
    A Visual Studio extension nevertheless exists on CodePlex for Visual Studio 2010 and 2008. It is called the Git Source Control Provider and is available here (look for the “How to use” section, which describes how to install it through the Visual Studio Extension Manager by searching the online gallery for “Git Source Control Provider”).
  • Visual Studio 2012: Git is officially supported by Microsoft.
    Git repositories are supported by Visual Studio 2012 (Update 3 or higher) but support is not included out-of-the-box with the product: a Visual Studio extension for Team Explorer providing source control integration for Git needs to be installed. This extension has been implemented by people working in the TFS team at Microsoft.
    The Visual Studio 2012 extension is available here: Visual Studio Tools for Git and comes in the form of an executable Windows installer file: Microsoft.TeamFoundation.Git.Provider.msi.
    The only prerequisite to install the extension is that Visual Studio 2012 Update 3 or higher is installed. At the time of writing, Visual Studio 2012 Update 4 is available here.
    Here is a screenshot of installing the Visual Studio 2012 extension:
    Visual Studio Tools for Git Setup
  • Visual Studio 2013 and above: Git is officially supported by Microsoft.
    Git repositories are supported out-of-the-box in the product and no extension is required.

6. Git and TFS

Team Foundation Server 2013 is the first version of TFS giving the choice to run as a CVCS (TFVC repository) or DVCS (Git repository). TFS 2013 is thus able to host standard Git repositories.
An advantage of using a Git repository (on top of being distributed) is that the numerous third-party tools and development environments supporting Git can use TFS Git repositories. For example, a company working with multiple technologies could use TFS to host all their code, regardless of whether they are web applications developed using Visual Studio or iOS apps developed with Xcode.
It is great to see that TFS 2013 and Visual Studio support standard Git so that it is fully interoperable with other platforms and tools. It looks like Microsoft embraced the open source standard instead of writing their own Distributed Version Control System or, worse, their own flavour of Git.

7. TFS in the cloud – Visual Studio Online

Microsoft has also launched a web-hosted version of TFS in the cloud. It was previously called Team Foundation Service and has now been renamed Visual Studio Online. As with on-premise TFS, it is an ALM (Application Lifecycle Management) solution on top of being a Version Control System. It is available on www.visualstudio.com.
A Microsoft Account (aka a Windows Live ID) is used to create a Visual Studio Online account. Basic accounts are free for up to 5 users (every additional user must be paid for). Different account types exist if more features/tools are necessary. If a team needs a paid account, the Visual Studio Online account will have to be linked with a Windows Azure account through which billing will occur. For more information on pricing, see: Visual Studio Online Pricing Details.
As with TFS 2013, Visual Studio Online can host either a TFVC or a Git repository.
In the screenshot hereunder, we can see that when creating a new Team Project on Visual Studio Online, we can choose the Version Control type, either TFVC or Git:
Visual Studio Online New Team Project

How to see the value returned by a method in Visual Studio Debugger?

In Visual Studio’s Debugger, it has always been a pain not to be able to see the value returned by a method when no variable was assigned to it.
Indeed, the debugger would not give a way to see the value returned by the following method:

static string Concat(string p1, string p2)
{
    return p1 + p2;
}
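
For instance, the method may be called with its return value passed straight to another call, so that no variable ever holds it (a hypothetical call site):

static void Main()
{
    // The string returned by Concat is consumed immediately;
    // no local variable is available to inspect in the debugger.
    Console.WriteLine(Concat("Hello, ", "World!"));
}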

The only way to see the value before it is returned by the method would be to modify (!) the source code and assign the result to a variable before returning it. This is less than ideal, as it means we have to stop the debugging session, modify code and restart debugging. It has unfortunately been a trick many developers have had to use throughout the years.
If we take the example above, we would change the method to something like the following to be able to see its return value at debug time:

static string Concat(string p1, string p2)
{
    string value = p1 + p2;
    return value;
}


The good news is that starting from Visual Studio 2013 we do not have to resort to such a trick anymore; we can directly examine the return value of a method even if there is no variable holding the value. We can do that at two moments in time:

  1. Just before the method returns: When we step over to the end of the method (the curly brace at the end of the method definition).
  2. Just after the method returns: When we step out of the method (back to the line of code calling the method). Note that the return value will be lost after the next Debugger Step Over.

There are two places where we can see the method return value:

  • In the Debugger Immediate window, using the $ReturnValue keyword. To open the Immediate window while debugging, choose Debug -> Windows -> Immediate (or press keyboard shortcut: Ctrl + Alt + I).
  • In the Debugger Autos window. To open the Autos window while debugging, choose Debug -> Windows -> Autos (or press keyboard shortcut: Ctrl + Alt + V, A).
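
For instance, with the hypothetical Concat("Hello, ", "World!") call shown earlier, stepping over to the closing brace of Concat and then evaluating $ReturnValue in the Immediate window would display something like:

$ReturnValue
"Hello, World!"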


Example when Stepping Over to the method end.

The return value will only be visible when we step over to the end of the method (on the curly braces ending the method definition).

In the picture hereunder, $ReturnValue does not return any value as we have not yet reached the end of the method.
Method Return Value Before Method End

In the picture hereunder, we are at the end of the method (on its ending curly brace). At this stage the debugger knows the value returned by the method and $ReturnValue is populated by the debugger. We can enter $ReturnValue in the debugger Immediate window and see that it holds the value returned by the method. The return value is also visible in the debugger Autos window at the entry with the word “returned” in it. The entry is only a placeholder to show the object instance returned by the method (in this case a string) and does not correspond to any actual variable name.
Method Return Value At Method End


Example when Stepping Out of the method.

In the picture hereunder, we have just stepped out of the method whose return value we want to know. We can see that the return value is made available by the debugger in both the Immediate and the Autos windows.
Return Value When Step Out Of Method

Once we step over to the next line of code in the debugger (F10 shortcut), the entry in the Autos window for the return value is lost and in the Immediate window, $ReturnValue does not evaluate to anything anymore.
No Return Value After Step Out Of Method

Reference: http://msdn.microsoft.com/en-us/library/dn323257.aspx

How to recursively delete files but keep the directory structure through the Windows command prompt?

The other day, I had to clear a structure of hundreds of folders containing over 60 GB worth of logs that we had moved to an archive file server. After copying the logs to the archive location, I had to delete all the files while keeping the folder structure. Having been accustomed to high-level languages for too long, I first wondered how to do this in a batch file. It turns out that the forfiles command does the task rather simply and elegantly:

forfiles /p D:\Archive\ /s /c "cmd /c IF @isDir EQU FALSE (del /Q /F @file)"

forfiles parameters:
/p -> Path in which the forfiles command will search for files.
/s -> Orders the forfiles command to execute recursively in all subdirectories.
/c -> Command to execute for each file found by forfiles. The command must be in quotes and start with cmd /c.

As forfiles returns all files and folders found within the given path, I simply had to check that the item found is not a directory before deleting it silently using the /Q parameter (the /F parameter forces the deletion of read-only files).

For the 60 GB worth of data I had, the command ran for a good 15 minutes and the job was done (no heavy CPU usage or anything else noticeable, so it is safe to use even on a production environment).
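
For comparison, here is a minimal C# sketch of the same task (assuming the same D:\Archive path; the forfiles one-liner above is what I actually ran, not this program):

using System;
using System.IO;

class ClearArchiveFiles
{
    static void Main()
    {
        // Recursively enumerate every file under the root and delete it,
        // leaving the directory structure itself untouched.
        foreach (string file in Directory.EnumerateFiles(@"D:\Archive", "*", SearchOption.AllDirectories))
        {
            // Clear the read-only attribute first, like del /F does.
            File.SetAttributes(file, FileAttributes.Normal);
            File.Delete(file);
        }
    }
}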

How to modify WCF services previously published with the BizTalk WCF Service Publishing Wizard?

The BizTalk WCF Service Publishing Wizard is the tool used to easily publish a WCF Service implemented in BizTalk (typically through an orchestration). See the Publish WCF Services section of the BizTalk documentation for some background information if you are not used to the process of publishing WCF services in BizTalk.

In short, the main output of the Wizard is a Web Application containing the Web Services defined by the user in the Wizard, plus a bunch of definition files and schemas. See Publishing WCF Services with the Isolated WCF Receive Adapters for details of the output produced.

The main problem when using the Wizard from the Program Files menu is that it always starts empty. If you have previously published a WCF service through the wizard and wish to modify it, you will always have to redefine the existing services and methods from scratch. This is quite inefficient, as at development time it is common to have to either:

  • Add new services to the site (in case it is hosted in IIS).
  • Add new web methods on an existing service.
  • Change a schema (which is probably the most frequent change).

It would thus be particularly tedious to have to completely redefine a WCF service each time we need to modify it.

One of the files produced by the Wizard is WcfServiceDescription.xml (located under \App_Data\Temp). As explained in Publishing WCF Services with the Isolated WCF Receive Adapters on MSDN, it is an XML file that summarizes the settings used when defining the WCF services in the Wizard.

Luckily, it is possible to feed this file back to the Wizard when running it again so that all the existing services and methods are pre-populated. This is a great time saver at development time as, more often than not, methods and contracts (schemas) change regularly.

Nevertheless, the tool is far from perfect and I had to deploy the BizTalk assembly containing the updated schema to the GAC so that the Wizard would see it. Having the assembly compiled and picked up by the wizard’s file dialog box did not seem to work properly, as I could only see the schemas that were already in the GAC from a previous deployment.

The way to do that is to launch the wizard from the command line by using the following syntax:
BtsWcfServicePublishingWizard.exe -wcfServiceDescription=C:\FolderPath\App_Data\Temp\WcfServiceDescription.xml

BtsWcfServicePublishingWizard.exe is located right in the folder where BizTalk is installed: “C:\Program Files (x86)\Microsoft BizTalk Server 2010” on my 64 bit machine.
The only shortcomings I have noticed so far are:

  1. In the wizard, if you choose to create receive locations in a BizTalk application, it will attempt to create all the receive locations defined in the wizard. If any of the receive locations already exists in the BizTalk application (from a previous run), the creation of ALL the receive locations will fail. Therefore, none of the new receive locations will be created, while the already existing receive locations obviously still exist in the BizTalk Application. This does not mean that the wizard fails; it still succeeds. We can thus grab the new BindingInfo.xml generated by the wizard, extract the new ports and import them separately through the BizTalk Admin console. Alternatively, it is also possible to simply delete the pre-existing receive locations before running the wizard.
  2. The wizard does not repopulate the target namespace of the generated WCF services; it will default back to http://www.tempuri.org/. The workaround is to pick it up beforehand from the service’s wsdl. When opening the wsdl, just look for the “targetNamespace” attribute in the <wsdl:definitions> element, take its value and paste it back in the Wizard.

Anyhow, even with these shortcomings, reusing the WcfServiceDescription.xml is still a great time saver!

BtsWcfServicePublishing.exe

On a side note, there is another similarly named tool, BtsWcfServicePublishing.exe, which can be downloaded here (notice that there is no “Wizard” at the end of the name). As this tool does not have any GUI, it can be used to script and automate the creation of WCF services for BizTalk, which can be useful for automated deployment, for example. See the tool reference. As the tool was made available for BizTalk 2006 R2 (.Net 2.0 runtime), the following <startup> configuration section must be added to the tool’s config file so that it can run against BizTalk 2010 assemblies (.Net 4.0 assemblies).

<configuration>
  <startup>
    <supportedRuntime version="v4.0" />
  </startup>
</configuration>

I have actually written a note about it in the MSDN documentation (see the Community Content section).

Datetime XML element converted to UTC – How to read the original time of a different time zone?

I noticed that time information is converted to UTC or to the local time zone when converting XML message elements of the datetime XML type to a .Net DateTime type. The side effect is that the original time is lost and cannot be recovered.

I will demonstrate this through a scenario and then draw conclusions and best practices to keep in mind. As this post got a little lengthier than expected, you can jump right to the summary section if you just want the facts.


Scenario

I had an orchestration in which I had to read a datetime XML element from an incoming message and put its value in a user-friendly string message which would ultimately be visible in an application front-end. The datetime element was made a distinguished field so that it would be easier to access. When I did a ToString() on the distinguished field, the time part was modified to reflect UTC time instead of the original time from the XML datetime element. This was a problem, as the log message had to reflect the actual time of the original message regardless of the time zone.

I wrote a little application to study what was going on through a few illustrating cases. The application has an Order Message containing a <ProviderTime> datetime element with a value using the time zone of Bangkok (UTC+7).


Case 1: Get the value from a distinguished field.

In this case I simply mark the datetime XML element as a distinguished field (in the message’s schema) and assign it to a DateTime .Net variable in an expression shape (* see the footnote for a remark about this).

Input XML:

<ProviderTime>2012-09-22T21:30:00.000+07:00</ProviderTime>

Result:

Calling the ToString() method on the DateTime variable prints: 9/22/2012 2:30:00 PM. This corresponds to UTC time and means that BizTalk’s runtime created a UTC System.DateTime structure when reading the distinguished field and assigning it to the variable.

While it is “correct” in the sense that both the XML datetime element and the .Net DateTime structure represent the same instant in time, it was not good for me as the user expected to see 9/22/2012 9:30:00 PM.

The reason this happens is that, as the System.DateTime structure does not contain any time zone information, the BizTalk runtime converts the time to UTC. To be exact, the BizTalk runtime calls the .Net framework XmlConvert.ToDateTime(String, XmlDateTimeSerializationMode) method, which creates a DateTime object and loses the original time zone information; BizTalk then converts the resulting DateTime to UTC. There is thus no way to display the time as it was in the original message.

We would not want to use a promoted property just for reading a value out of a message but if we had a promoted property, using it would produce the same result.


Case 2: Get the value from XPath and convert it to a System.DateTime structure

In this case I use XPath to get the element value and parse the resulting string into a System.DateTime structure with the following method: System.Xml.XmlConvert.ToDateTime(String). I also tried the various overloads of the ToDateTime() method.

Input XML:

<ProviderTime>2012-09-22T21:30:00.000+07:00</ProviderTime>

Result:

Calling the DateTime.ToString() method on the DateTime structure would now print: 9/22/2012 3:30:00 PM. This is UTC+1, Dublin’s time zone (+0) with Daylight Saving Time (+1 in summer). It means that the XmlConvert.ToDateTime(String) method creates a System.DateTime structure reflecting the Local Time. Note that this particular overload is deprecated and others exist, which I tried, but basically all they let you choose is whether you want to create a DateTime reflecting Local Time or UTC.


Using the DateTime structure is a lost cause, as it is not time zone aware and would thus never be able to hold anything other than local time or UTC time. To solve my problem, I had to do something else, but did not want to resort to ugly manual string parsing.

I did some research about Date and Time in the .Net framework and read from MSDN that:

DateTimeOffset should be considered the default date and time type for application development

So yes you read it correctly, System.DateTime is “sort of” deprecated! See for yourselves directly from the horse’s mouth: http://msdn.microsoft.com/en-us/library/bb384267.aspx

As the DateTimeOffset structure contains an Offset property (a TimeSpan) representing the difference between the time stored in the structure and UTC, it can represent times other than local time or UTC time.


Case 3: Get the value from XPath and convert it to a System.DateTimeOffset structure

In this case I use XPath to get the element value and parse the resulting string into a System.DateTimeOffset structure with the following method: System.Xml.XmlConvert.ToDateTimeOffset(String).

Input XML:

<ProviderTime>2012-09-22T21:30:00.000+07:00</ProviderTime>

Result:

Bingo! Calling the ToString() method on the DateTimeOffset variable prints 9:30 PM, as in the original message! Now all I had to do was use an overload of the ToString() method taking a format string to display that in a user-friendly manner.
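
Outside BizTalk, cases 2 and 3 can be reproduced with a few lines of C#. Here is a minimal sketch (the commented output assumes a machine in Dublin’s summer time zone, as in case 2):

using System;
using System.Xml;

class ProviderTimeDemo
{
    static void Main()
    {
        const string providerTime = "2012-09-22T21:30:00.000+07:00";

        // Case 2: System.DateTime cannot store the +07:00 offset, so the
        // value is converted to the machine's local time; the original
        // 9:30 PM wall-clock time is lost.
        DateTime dt = XmlConvert.ToDateTime(providerTime, XmlDateTimeSerializationMode.Local);
        Console.WriteLine(dt);         // 9/22/2012 3:30:00 PM

        // Case 3: System.DateTimeOffset keeps the offset, so the original
        // time is preserved.
        DateTimeOffset dto = XmlConvert.ToDateTimeOffset(providerTime);
        Console.WriteLine(dto);        // 9/22/2012 9:30:00 PM +07:00
        Console.WriteLine(dto.Offset); // 07:00:00
    }
}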


Here is a screenshot of the result of my investigations with cases 1, 2 and 3 highlighted:

datetime timezone xml element parsing

And here is a Visual Studio solution if you want to play around yourself (or for myself in the future).


Summary:

  1. A distinguished field on a datetime xml element creates a System.DateTime structure adjusted to UTC time; the DateTime.Kind property is DateTimeKind.Utc. So if, in a different scenario than mine, the distinguished field always contains local time, you can use DateTime.ToLocalTime() to convert the value back to local time; the DateTime.Kind property will then have the value DateTimeKind.Local (see the sketch after this list). The time in the original time zone is lost.
  2. Reading a datetime xml element into a string by using XPath and then converting it to a System.DateTime structure by using one of the XmlConvert.ToDateTime() method overloads creates a Local Time or UTC time structure (depending on a parameter of one of the overloads). The time in the original time zone is lost.
  3. Reading a datetime xml element into a string by using XPath and then converting it to a System.DateTimeOffset structure by using one of the XmlConvert.ToDateTimeOffset() method overloads keeps the original time, as it holds the time zone offset information (e.g. +7 hours). We can thus either display the time of the original time zone or convert it to another time zone offset.
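
To illustrate point 1, here is a small sketch (the DateTime value is what the runtime produces for the Bangkok message above; the local result assumes a UTC+1 machine):

using System;

class KindDemo
{
    static void Main()
    {
        // A distinguished field on 2012-09-22T21:30:00+07:00 arrives as UTC.
        DateTime fromField = new DateTime(2012, 9, 22, 14, 30, 0, DateTimeKind.Utc);
        Console.WriteLine(fromField.Kind); // Utc

        // Only meaningful if the original value was local time to begin with.
        DateTime local = fromField.ToLocalTime();
        Console.WriteLine(local.Kind);     // Local
        Console.WriteLine(local);          // 9/22/2012 3:30:00 PM on a UTC+1 machine
    }
}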


Type of XML datetime read                | .Net Type created     | Default Time zone                 | Time in original time zone available?
---------------------------------------- | --------------------- | --------------------------------- | -------------------------------------
Distinguished field                      | System.DateTime       | UTC                               | NO
Promoted Property                        | System.DateTime       | UTC                               | NO
XPath and XmlConvert.ToDateTime()        | System.DateTime       | Local Time (for default overload) | NO
XPath and XmlConvert.ToDateTimeOffset()  | System.DateTimeOffset | Original time zone                | YES


Conclusion:

  1. Do not use distinguished fields on datetime elements; use XPath and a DateTimeOffset variable instead. This is a tip I will keep in mind so that I don’t have to worry about the original time value being lost. If you decide to use a distinguished field anyway, you must be aware of its limitation and whether it impacts you or not.
  2. When using a promoted property on a datetime element, be aware that its value will be converted to the UTC time zone. This might be of importance for subscriptions (port filters and so on). There is no workaround for this, as it is part of the BizTalk engine.


I like to use distinguished fields when it makes sense because they avoid having to use XPath queries all over the place, which can be an annoyance when a schema changes. As Microsoft advises to use the System.DateTimeOffset structure, would it not be nice to have a distinguished field assignable to a DateTimeOffset variable instead of a DateTime? It would make a lot of sense, as the XML datetime type is time zone aware while DateTime is not but DateTimeOffset is, making the latter a much better match. Anyhow, it is definitely something I would put on my wish list of BizTalk features!

It might of course not be straightforward to implement this feature, as the code generated by the BizTalk compiler would depend on the type of the variable you assign the distinguished field to (either DateTime or DateTimeOffset). Or maybe some automatic/easy casting is possible; there is a lengthy article on MSDN about the conversion between DateTime and DateTimeOffset, but I did not try to play around with it.


(*) Footnote:

While the distinguished field looks like a .Net variable accessed like an object’s member, it actually is not; that is just how it is displayed in the expression shape editor. If you have in the editor something like myVar = MyMsg.MyDistinguishedField, it is just the syntax to access the distinguished field in the expression shape. In reality, the code generated from the expression shape will be: myVar = MyMsg.part.GetDistinguishedField(“MyDistinguishedField”). The method will return the correct .Net type depending on the XML type of the distinguished field.

This is why you can’t think of the distinguished field as an object member and can’t call any method such as ToString() directly on it; doing so will confuse the code generator and the orchestration won’t compile. Hence, we must always assign the distinguished field to a variable.