
    A lot of traveling past months...

    At this moment I'm on the train to Stockholm, after giving a presentation in Borlange at a Sogeti office. They launched a dedicated “Sogeti Team System Center” yesterday and I was invited to give a presentation about VSTS2010. It’s really great to see how mature their development environments are and how they use TFS. So, besides the 1.5-hour presentation [that's too long for listening but too short to discuss VSTS2010], we had a lot of talks and discussions, exchanging ideas and practices.

    [Photo: the preparation]

    [Photo: fresh snow…]

    Two weeks ago I gave almost the same presentation about VSTS2010 in Barcelona at a Microsoft Partner event, although not exactly the same: I didn't show any demos over there [not enough time].

    [Photo: a chat with Sam]

    [Photo: some Gaudi buildings]

    Before Barcelona I flew to Los Angeles to attend the PDC [just after my vacation in Nova Scotia]… So, for the past few months of this year, enough traveling, talks, discussions and presentations… Time to get working ;-)

    Posted: Dec 10 2008, 17:00 by Clemens

    Architecture Exploration with VSTA2010

    Did some playing today with the Architecture Explorer in 2010, which generates some nice interactive, navigable graphs from code or assembly.

    I used the Architecture Explorer on different kinds of application architectures, actually different kinds of UI implementations, just as I did a while ago in the post Rosario - Architecture Explorer's Dependency Matrix and this very old post Visual Studio 2008 Features: Code Metrics.

    You can analyze relationships at assembly, class and namespace level. The first implementation doesn't show much magic [it’s just a drag-and-drop WinForms implementation, see previous posts] and, as you can see in the graph below, there is a form with two classes, Employees and Employee, directly coupled to this form.

    [Screenshot: assembly-level dependency graph]
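
    For reference, a minimal sketch of the kind of code behind this graph. This is my own reconstruction, not the actual demo project; the class and member names are assumptions that just mirror the coupling the graph shows.

    using System.Collections.Generic;
    using System.Windows.Forms;

    // Hypothetical shapes only, to illustrate the direct coupling in the graph.
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class Employees
    {
        public List<Employee> GetAll() { return new List<Employee>(); }
        public Employee Lookup(int id) { return null; }
    }

    public class EmployeeForm : Form
    {
        // The form talks to both classes directly, hence the tight coupling in the graph.
        private readonly Employees employees = new Employees();
        private Employee current;

        private void cmdLookup_Click(object sender, System.EventArgs e)
        {
            current = employees.Lookup(1);
        }
    }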

    The namespace graph doesn't add much information; only the namespaces are visualized, and it just looks like a layer diagram.

    [Screenshot: namespace-level graph]

    The default class-level view gets a little bit complex due to all the methods and properties in the code-behind file. You can also see the Employee class with all its properties and the very thin Employees class with only two methods.

    [Screenshot: class-level graph]

    This is another view, where I selected only the interesting classes and methods.
    At the top middle of the graph you can easily see the interaction between the employee form, cmdLookup and the Employees lookup method.

    [Screenshot: selected classes and methods]

    But this isn’t a very interesting implementation, so let’s look at a kind of Model-View-Controller implementation. If you have read the “Rosario - Architecture Explorer's Dependency Matrix” post, you already know that this one has a cyclic dependency.
    The namespace view nicely shows this dependency and the three namespaces used in this solution.

    [Screenshot: MVC namespace view with the cyclic dependency]

    With a more detailed view we can find the methods responsible for this dependency [click to enlarge]. Actually, all the methods at the top have a dependency on both the form and the controller.

    [Screenshot: detailed MVC dependency view]

    Next, a more complex implementation: the adapter pattern together with the command pattern and the observer pattern. In this kind of application the Architecture Explorer gets valuable. During courses I give all these implementations to the attendees, with the assignment to figure out how the implementation works and what its pros and cons are. This one I often have to explain, just because they only have the code; with a diagram like this it will be easier for them [this is the namespace view].

    [Screenshot: adapter/command/observer namespace view]

    In more detail it looks like this: a more loosely coupled solution, and the controller doesn't know the view.

    [Screenshot: detailed adapter/command/observer view]

    and finally an MVP implementation with interfaces…

    [Screenshot: MVP implementation with interfaces]
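
    As a rough idea of what is behind that last graph, here is a minimal MVP-with-interfaces sketch. It is my own illustration, not the course code; only the structure matters, the names are made up.

    // Hypothetical MVP shapes: the presenter only knows the view through an interface.
    public interface IEmployeeView
    {
        string EmployeeName { get; set; }
    }

    public class EmployeePresenter
    {
        private readonly IEmployeeView view;   // depends on the interface, not on a concrete form

        public EmployeePresenter(IEmployeeView view)
        {
            this.view = view;
        }

        public void Load()
        {
            view.EmployeeName = "John Doe";
        }
    }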

     

    So, that's it… Is it useful? Yes, definitely, especially with more complex solutions. Architecture validation is hard, but the Architecture Explorer isn't meant for validation; it's for discovery, and that works, although the default views have a lot of noise from the code-behind files, properties and settings files. But with manual selection of classes we get some nice overviews. I do think there are some preferred filter methods/practices when you want to find something specific; at this time I select every possible option till I find something that fits my needs… have to work on that.

    Posted: Nov 23 2008, 19:15 by Clemens

    VS2010 UML Profiles update… C#4.0 dynamic type in DTE

    It’s not really a UML Profile update; it’s actually more about Visual Studio add-ins.

    I made the UML Profile implementation and functionality in the previous post with the CTP12 bits, and that doesn’t work anymore in the currently available CTP. Why not? VS add-ins won’t work anymore… so you need to use a different approach to fire your commands. I explained the use of add-ins and the UML diagrams in the post “Rosario – Create Custom Team Architect UML Diagram-MenuItems”, but for some reason add-ins won’t work anymore. So, back to progression providers; see the post “Rosario – Create your own Progression Provider”.

    A nice thing, found while digging into the problem, is that they are already using the C# 4.0 dynamic type in the DTE namespace…

    [Screenshot: the dynamic CommandBars property in the new DTE]

    The use of this property should be something like this; I didn’t test it, so I don’t know if this is going to work [found the get_CommandBars method using Reflector]:

    [Screenshot: the command code using the dynamic property]
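
    In plain text, a minimal sketch of what that might look like. As said, this is untested; the ProgID and the use of a standalone console program are my own assumptions, only the idea of the dynamically typed CommandBars property comes from the screenshots.

    using System;
    using System.Runtime.InteropServices;

    class DynamicDteSketch
    {
        static void Main()
        {
            // Attach to a running Visual Studio instance; the exact ProgID for the CTP may differ.
            dynamic dte = Marshal.GetActiveObject("VisualStudio.DTE");

            // In the new DTE the CommandBars property is exposed as 'dynamic' (C# 4.0),
            // so member access is resolved at runtime and no cast or interop reference is needed.
            dynamic commandBars = dte.CommandBars;
            dynamic menuBar = commandBars["MenuBar"];

            Console.WriteLine(menuBar.Name);
        }
    }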

    In the previous version this command looks like this [won’t compile anymore in VS2010]:

    [Screenshot: the old command code]

    This is the old DTE interface…

    [Screenshot: the old DTE interface]

     

    Anyway, not that important… although I planned some fun use of UML Profiles for today, that one has to wait now.
    For more information on C# 4.0 dynamic types, see www.microsoftpdc.com and Channel 9.

    Posted: Nov 20 2008, 17:17 by Clemens

    VSTA 2010 – UML Profiles [make your own…]

    Has anyone seen that there is the capability to attach UML Profiles to the UML diagrams in the new CTP?
    Just take a look at the property window of the component diagram, use case diagram or any other diagram, and you can see that there are three out-of-the-box ‘trial’ profiles available.

    [Screenshot: the Profiles property in the property window]

    You can find this ‘Profiles’ property on the design surface, and after selecting one [or more] profiles you get a new property on the shapes named ‘Stereotypes’, with values depending on the profile you selected.

    [Screenshot: the Stereotypes property on a shape]

    That’s it for Profiles and Stereotypes in the VSTA diagrams. It doesn’t seem that valuable at first sight, but you can do magic with it.

    UML Profiles
    First a brief explanation of UML Profiles; better and less time-consuming is a copy-paste from Wikipedia and the OMG [a little bit fuzzier than the wiki].

    A profile in the Unified Modeling Language provides a generic extension mechanism for customizing UML models for particular domains and platforms. Profiles are defined using stereotypes, tagged values, and constraints that are applied to specific model elements, such as Classes, Attributes, Operations, and Activities. A Profile is a collection of such extensions that collectively customize UML for a particular domain (e.g., aerospace, healthcare, financial) or platform (J2EE, .NET).   

      • Identifies a subset of the UML metamodel.
      • Specifies “well-formedness rules” beyond those specified by the identified subset of the UML metamodel.
        “Well-formedness rule” is a term used in the normative UML metamodel specification to describe a set of constraints written in UML’s Object Constraint Language (OCL) that contributes to the definition of a metamodel element.
      • Specifies “standard elements” beyond those specified by the identified subset of the UML metamodel.
        “Standard element” is a term used in the UML metamodel specification to describe a standard instance of a UML stereotype, tagged value or constraint.
      • Specifies semantics, expressed in natural language, beyond those specified by the identified subset of the UML metamodel. 
      • Specifies common model elements, expressed in terms of the profile.

    A UML Profile for VSTA.
    Why would you actually want a profile? You can add additional information to the diagrams and shapes and use this extra information for everything you can think of: just for visualization and communication, but also for generation and validation.

    For example, the component diagram is often used for visualizing the structure [components] of the solution and the interfaces between those components. As Scott Ambler writes:

    UML component diagrams are great for doing this as they enable you to model the high-level software components, and more importantly the interfaces to those components.  Once the interfaces are defined, and agreed to by your team, it makes it much easier to organize the development effort between subteams.

    While in UML 1.1 a component referred to files, UML 2.x describes them more as autonomous, encapsulated units with interfaces [see UML basics: The component diagram], which gives us a wider view of what we can describe with them. But thinking of subteams, development effort, autonomous encapsulated units and file structures, we can make a useful profile for our solution structure. Some time ago I wrote something about Autonomous Develop Services for SOA Projects with Team Architect and Service Factory; it’s about how you want to set up your solution structure so development teams [subteams] don’t interfere with each other.
    So, with the component diagram you can design your solution as “autonomous, encapsulated units with interfaces”. With a profile attached to this diagram, which gives these units extra meaning according to the solution structure and what kind of units they are, we have valuable information about how the solution should be structured; even more interesting, we can make a ‘Solution Structure Generator’ for the component diagram.

    Make your own
    The UML Profiles in VSTA are XML files in the “C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\UmlProfiles” folder; below is the .NET Profiles profile file. Just copy your own “.Profile” file to this folder and it will appear in the combo box [after restarting VS].

    [Screenshot: the .NET Profiles .Profile XML file]

    When looking at the property window, the structure becomes really clear. I selected the .NET Profiles profile on the diagram [first yellow marker]; the .NET Assembly stereotype [second yellow marker] can be used with components [next yellow marker], and this stereotype has some more properties [vertical yellow marker].

    [Screenshot: profile, stereotype and stereotype properties in the property window]

    So, now we know how to make a profile… or actually how to add some additional information to a diagram. We can make it useful by creating a menu item for the diagram with some functionality that takes this additional information and creates the solution structure for you.

    foreach (Microsoft.VisualStudio.Uml.Classes.ProfileInstance pi in componentmodel.ProfileInstances)
    {
        if (pi.Name == "Project Structure")
        {
            // elm is the UML element of the current shape (set earlier in the command handler)
            foreach (var item in elm.AppliedStereotypes)
            {
                switch (item.Name)
                {
                    case "Single":
                        Single(componentmodel);
                        break;
                    case "Partinoned":
                        Partinoned(componentmodel);
                        break;
                    case "Multiple":
                        Multiple(componentmodel);
                        break;
                    default:
                        break;
                }
            }
        }
    }

    As you can see, I used the same solution structure types as described in Chapter 3 – Structuring Projects and Solutions in Source Control of the TFS Guidance from the Patterns and Practices group [I still like the "Partinoned" solution structure].

    [Image: solution structure types from the TFS Guidance]

    and some code to create the projects…

    foreach (ComponentProxy cp in componentmodel.ComponentProxies)
    {
        Microsoft.VisualStudio.Uml.Classes.Element elm = cp as Microsoft.VisualStudio.Uml.Classes.Element;

        // Read the project type from the stereotype property instance on the component
        string projectype = "";
        if (cp.AppliedStereotypes[0].PropertyInstances[3] != null)
        {
            projectype = cp.AppliedStereotypes[0].PropertyInstances[3].Value;
        }

        string projectTemplate = "";
        Solution2 soln = (Solution2)vsApp.Solution;

        System.IO.FileInfo fi = new System.IO.FileInfo(soln.FileName);
        string solutiondirectory = fi.Directory.FullName;

        switch (projectype)
        {
            case "ASP.NET Web Application":
                projectTemplate = soln.GetProjectTemplate("WebApplicationProject.zip", "csproj");
                break;
            // ... other project types elided ...
        }

        // (not in the original snippet) the resolved template can then be used to add the project, e.g.:
        // soln.AddFromTemplate(projectTemplate, System.IO.Path.Combine(solutiondirectory, cp.Name), cp.Name, false);
    }
    Final Notes.
    The profile and its use in the ‘solution structure’ example have some value, although if you want to use it in the real world it would probably be more complex, with regeneration, versioning and that kind of thing. But being able to add additional information to the diagrams through a very simple mechanism is a very powerful extensibility point. Scenarios like using a data modeling UML Profile for logical modeling and upgrading this model to the physical level with the Entity Framework are very interesting, and I need stereotypes for the use case diagrams to implement the estimation scenario.

    More on this in the future…

    Posted: Nov 20 2008, 12:11 by Clemens

    Three Model Archetypes for Oslo…

    It took me several days and a lot of discussions at the PDC to understand the value of “Oslo”, and I still didn’t get it after the PDC. But now, finally, after watching all the sessions/videos, walking through almost all the walkthroughs and reading the complete documentation, I think I’m there, although still not sure...

    The piece that confused me was the first demo in the “A Lap around Oslo” session.

    [Screenshot: the “A Lap around Oslo” demo]

    First a little background. Several weeks ago I made a post about different modeling approaches to discuss Oslo’s place in the modeling world. The most important feature of Oslo, in that post, is the “Model Execution” part, and I was really thinking of this feature as: draw/write your model and it will be interpreted into running applications on the server. So, what I was looking for at the PDC, what I was expecting, was “how to make an interpreter for Oslo”.
    It didn’t get any clearer after asking the question “What if I want to translate the MService model [a great example of modeling and executing services] from English to Dutch? The execution/interpretation will break. So I have to change something in the interpreter… how?”. How can I make my external/business DSLs, or a translated version of the MService model, or a custom internal DSL, execute at runtime? There isn’t any explanation around building an interpreter for my ‘not-out-of-the-box’ model.

    It took some mind-resetting to figure out what the value of Oslo is. Finally, after a deep dive into the documentation, I found three different model archetypes. See the picture below…

    [Diagram: three model archetypes for Oslo]

    So, the three archetypes:

    1. Model Data Aware Application.

    The “A Lap Around” demo type, which got me confused.
    This type is the same as what Steve Kelly writes on his blog:

    Overall, it looks like Oslo is primarily just a way to provide configuration information for Microsoft applications

    When, during that demo, the presenters promised that they were going to build a runtime for the model, I was really excited. Too bad it turned out to be an ASP.NET form with a grid view which reads the XML file that was generated out of the models. For sure it is pretty exciting what you can do with M, MGraph, MGrammar, MSomething and Quadrant, but this was kind of a disappointment.

    This is indeed something that can be used for the configuration of applications. A more interesting scenario is the company-wide use of the same data structure, but in that scenario the business problems are more challenging than replacing the canonical data model with Oslo.

    2. Model Runtime Aware Application

    Type 2 gets more interesting, but is very tightly scoped.
    This archetype is almost the same as the one David Chappell uses in his whitepaper “A First Look at WF 4.0, Dublin, and Oslo”.

    [Diagram: model runtime aware application, after David Chappell’s whitepaper]

    Step 4 should be skipped for this type: after adding the extra or changed activity to the (also changed) workflow, the model is updated in the repository. An application which uses this model starts running with this new flow. Workflow uses XAML, and this is interpreted at runtime from the repository.
    This type makes it easier to change or add flows and other kinds of pieces of an application [you could, for example, make a model and interpreter for the menu bar]. The application understands the models provided by the Oslo system [and they are interpreted at runtime].
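
    As a rough illustration of the idea, here is a minimal sketch in WF4 style: it is not Oslo-specific and not from the whitepaper, and the file path stands in for wherever the repository exposes the flow definition. It just shows a XAML flow being loaded and executed at runtime instead of being compiled into the application.

    using System.Activities;
    using System.Activities.XamlIntegration;

    class ModelRuntimeAwareApp
    {
        static void Main()
        {
            // Hypothetical: the application pulls the latest flow definition (XAML)
            // that was published to the repository, instead of compiling it in.
            string xamlPath = @"C:\ModelStore\OrderFlow.xaml";   // placeholder path

            // Deserialize the XAML into an activity tree at runtime...
            Activity flow = ActivityXamlServices.Load(xamlPath);

            // ...and execute it. A changed model in the store means a changed flow,
            // without redeploying the application.
            WorkflowInvoker.Invoke(flow);
        }
    }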

    This archetype will help with several deployment and maintenance problems of a typical type of business application. Business processes change over time, and the business can change them without the interference of IT people [Workflow Foundation already has this capability, but it isn’t used by the business]. I can imagine that over time more and more interpreters would become available for different kinds of ‘models’ and that IT can more and more stitch everything together. For now it’s a very narrow scope, but promising. Although I think Steven will still call it “configuring Microsoft applications” ;-) [you are right, it is…]

    An interesting scenario for this type is using the Oslo models together with the Team Architect models. The models in Oslo are probably used company-wide or even worldwide; it’s a big effort to make an interpreter for these models, so they will have a wide usage scope. So, if you could put constraints from the Team Architect models [your specific solution] on what the business can do with the Oslo models, so that it still fits your business solution… an interesting scenario; maybe more on this later.

    3. Model Interpreted Application

    This is the archetype I expected from Oslo: define your language, execute it at runtime and use that model on different platforms/runtimes. Oslo helping with building your own runtime. It is possible to accomplish this with the current bits [maybe not on all platforms, due to SQL Server].

    Anyway, first let’s start playing with archetype 2. I do think that’s a valuable solution, maybe in conjunction with Team Architect… later on with a self-made model and interpreter. So, another thought about Oslo next to the hundreds of thoughts already out there in the cloud… cloud? Azure? Live? Mesh? SSDS?… everything in Oslo!

    Posted: Nov 02 2008, 18:37 by Clemens

    PDC Sessions Download

    I like this picture, found on Flickr [see all photos tagged PDC2008].

    [Photo: PDC2008, from Flickr]

     

    [Image: Zune 120]

    I was live at the PDC, but haven't seen that many sessions. Most of the time I was at the VSTS and Oslo booths, talking/discussing with a lot of people. Main reason… all the sessions are available for download after 24 hours at Channel 9. So, back at the hotel, I started the downloads…

    Too bad KLM flights haven't got any power plugs in economy class, so I had to find some kind of device at the PDC which could play videos in the airplane; I don't think it's a surprise that it's a Zune 120… I uploaded all the sessions and I really like the screen… even after watching more sessions than I could ever have attended live.

    Posted: Nov 01 2008, 11:26 by Clemens

    Exposing orchestrations with WCF and headers

    [There were still people at work back in Holland while I was visiting LA to attend the PDC. One who stayed home made a really great solution for the BizTalk WCF Adapter and WSDL headers. Too interesting a solution to keep offline. So, here is a guest post from Ronald Kuijpers. LinkedIn profile]

    Recently I was asked to expose an orchestration (or its schemas) using WCF. Due to company standards, the WCF service had to use a custom header for inbound and outbound messages. However, the orchestration did not access the header; it just had to be copied from inbound to outbound.

    The problem is that the WCF Service Publishing Wizard does not support headers, as can be read on MSDN:

    The BizTalk WCF Service Publishing Wizard does not include custom SOAP header definitions in the generated metadata. To publish metadata for WCF services using custom SOAP headers, you should manually create a Web Services Description Language (WSDL) file. You can use the externalMetadataLocation attribute of the <serviceMetadata> element in the Web.config file that the wizard generates to specify the location of the WSDL file. The WSDL file is returned to the user in response to WSDL and metadata exchange (MEX) requests instead of the auto-generated WSDL.

    Luckily, the answer is also presented… but I don’t like creating a WSDL manually. Several other sources show how to change the generation of the WSDL. For me, the most important were written by Tomas Restrepo [here and here] and Patrick Wellink [here], but none of them added a header.

    To add the header messages, I dug into System.ServiceModel.dll.

    The second thing I wanted, since I was going to create my own endpoint behavior anyway, was to copy the header from the inbound to the outbound message. This way, I could concentrate in my orchestration on the things that matter. This solution fits very well into the WCF architecture. Some nice blog posts about message inspectors were written by Paolo Pialorsi [here and here].

    First, create a class that derives from BehaviorExtensionElement and implements IWsdlExportExtension and IEndpointBehavior. You have to derive from BehaviorExtensionElement to make the component configurable, implement IWsdlExportExtension to change the generated WSDL and implement IEndpointBehavior for copying the header.

    public class CustomHeaderEndpointBehavior : BehaviorExtensionElement, IWsdlExportExtension, IEndpointBehavior
    {
        #region BehaviorExtensionElement Overrides
        public override Type BehaviorType
        {
            get { return typeof(CustomHeaderEndpointBehavior); }
        }

        protected override object CreateBehavior()
        {
            return new CustomHeaderEndpointBehavior();
        }
        #endregion

        #region IEndpointBehavior Members

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            CustomHeaderMessageInspector headerInspector = new CustomHeaderMessageInspector();
            endpointDispatcher.DispatchRuntime.MessageInspectors.Add(headerInspector);
        }

        // The other IEndpointBehavior members are not needed for this scenario; no-ops suffice.
        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
        public void Validate(ServiceEndpoint endpoint) { }

        #endregion

        #region IWsdlExportExtension Members

        public void ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context)
        {
            CustomerHeaderWsdlExport.ExportEndpoint(exporter, context);
        }

        // IWsdlExportExtension also requires ExportContract; nothing to do here.
        public void ExportContract(WsdlExporter exporter, WsdlContractConversionContext context) { }

        #endregion
    }

    Note that only a few interface methods need real implementations. Copying the header is taken care of by the CustomHeaderMessageInspector (credits to Paolo):

    public class CustomHeaderMessageInspector : IDispatchMessageInspector
    {
        #region Message Inspector of the Service

        // IDispatchMessageInspector also requires AfterReceiveRequest; nothing to do on the way in.
        public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
        {
            return null;
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            // Look for my custom header in the request
            Int32 headerPosition = OperationContext.Current.IncomingMessageHeaders.FindHeader(CustomHeaderNames.CustomHeaderName, CustomHeaderNames.CustomHeaderNamespace);

            // Get an XmlDictionaryReader to read the header content
            XmlDictionaryReader reader = OperationContext.Current.IncomingMessageHeaders.GetReaderAtHeader(headerPosition);

            // Read through its static method ReadHeader
            CustomHeader header = CustomHeader.ReadHeader(reader);

            if (header != null)
            {
                // Add the header from the request
                reply.Headers.Add(header);
            }
        }

        #endregion
    }

    Adding the header messages is a bit more complicated:

    1. Add the header schema;
    2. Add the header schema namespace;
    3. Create and add a header message description;
    4. Add header to operation description.

    A piece of code says more than a thousand words:

    public class CustomerHeaderWsdlExport
    {
        public static void ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context)
        {
            // Read the schema of the custom header message
            XmlSchema customSoapHeaderSchema = XmlSchema.Read(Assembly.GetExecutingAssembly().GetManifestResourceStream("CustomHeaderBehavior.CustomSoapHeader.xsd"), new ValidationEventHandler(CustomerHeaderWsdlExport.ValidationEventHandler));

            // Create the HeaderMessage to add to wsdl:message AND to refer to from wsdl:operation
            System.Web.Services.Description.Message headerMessage = CreateHeaderMessage();

            // WsdlDescription is assumed to be an alias for System.Web.Services.Description.ServiceDescription
            foreach (WsdlDescription wsdl in exporter.GeneratedWsdlDocuments)
            {
                // Add the schema of the CustomSoapHeader to the types AND add the namespace to the list of namespaces
                wsdl.Types.Schemas.Add(customSoapHeaderSchema);
                wsdl.Namespaces.Add("ptq0", "http://.../CustomSoapHeader/V1");

                // The actual adding of the message to the list of messages
                wsdl.Messages.Add(headerMessage);
            }

            addHeaderToOperations(headerMessage, context);
        }

        // Schema validation callback used by XmlSchema.Read above; nothing to do here.
        private static void ValidationEventHandler(object sender, ValidationEventArgs e) { }
    }

    The following code generates the header message description:

    private static System.Web.Services.Description.Message CreateHeaderMessage()
    {
        // Create Message
        System.Web.Services.Description.Message headerMessage = new System.Web.Services.Description.Message();

        // Set the name of the header message
        headerMessage.Name = "CustomHeader";

        // Create the message part and add it to the header message
        MessagePart part = new MessagePart();
        part.Name = "Header";
        part.Element = new XmlQualifiedName("CustomSoapHeader", "http://.../CustomSoapHeader/V1");
        headerMessage.Parts.Add(part);

        return headerMessage;
    }

    The method addHeaderToOperations adds the header to the input and output message bindings using a SoapHeaderBinding.

    private static void addHeaderToOperations(System.Web.Services.Description.Message headerMessage, WsdlEndpointConversionContext context)
    {
        // Create an XmlQualifiedName based on the header message; this will be used for binding the header message and the SoapHeaderBinding
        XmlQualifiedName header = new XmlQualifiedName(headerMessage.Name, headerMessage.ServiceDescription.TargetNamespace);

        foreach (OperationBinding operation in context.WsdlBinding.Operations)
        {
            // Add the SoapHeaderBinding to the input and output MessageBindings
            ExportMessageHeaderBinding(operation.Input, context, header, false);
            ExportMessageHeaderBinding(operation.Output, context, header, false);
        }
    }

    private static void ExportMessageHeaderBinding(MessageBinding messageBinding, WsdlEndpointConversionContext context, XmlQualifiedName header, bool isEncoded)
    {
        // For brevity, assume Soap12HeaderBinding for SOAP 1.2
        SoapHeaderBinding extension = new Soap12HeaderBinding();

        extension.Part = "Header";
        extension.Message = header;
        extension.Use = isEncoded ? SoapBindingUse.Encoded : SoapBindingUse.Literal;

        messageBinding.Extensions.Add(extension);
    }

    This might seem like a lot of code, but it is almost a fully working solution. Patrick's blog post does an excellent job of explaining how to put this to work for BizTalk.

    Posted: Nov 01 2008, 09:35 by Clemens

    Taking off for the PDC and an eleven hour Flight :-S

    I’m not a conference blogger, so no updates about sessions, and I don’t like flying, but that’s a different story…


    I will be staying in the Cecil Hotel in downtown LA, which has some interesting reviews on tripadvisor.com… “Do Not Stay Here Under Any Circumstance.”, “Un Professional People Trying To Look Busy/Working”, “Horrible experience”. But it's cheap, and the reviews are more than a year old, so hopefully the hotel has improved or the writers had a bad day… anyway, it's warm and sunny in LA.
    I will be at http://pdc08.partywithpalermo.com/ Sunday evening, and if that party takes too long, I can still watch the keynotes in bed –> Watch PDC08 Keynotes Online [I'll probably be awake at 4 with a 10-hour jet lag… no need to arrange a wake-up call].

    Posted: Oct 25 2008, 03:14 by Clemens

    Rosario – Create Custom Team Architect UML Diagram-MenuItems

    There are many ways, maybe too many, to extend Visual Studio: macros, add-ins, VSPackages [see: Visual Studio Extensibility Demystified], GAT/GAX, et cetera, and now with Rosario we get even more ways with new features like the Architecture Explorer [see the Create your own Progression Provider post].

    So, we’ve got a lot of ways to add our own functionality. While the Architecture Explorer is more for visualizing the architecture of your applications [or binaries], it is also possible to make executable commands there, but it's not really the best place to do that. I think users will get lost when I hide my commands in there; they are used to the current structure of command bars and context-menu items. We could also use GAT/GAX [did that with the first ideas around test case generation] to add commands, which also works pretty easily… [note: you can install all the power tools, add-ins, GAT/GAX packages and factories which work on Orcas on the Rosario CTP12 as well].

    [Screenshot: command bar with the custom menu item]

    Anyway, because I think commands should be as near as possible to the thing they act on, and for the test case generation that would be the activity diagram, I wanted the command on the activity diagram's design surface. Not really rocket science, because the Team Architect UML diagrams are based on the DSL Tools.

    First, create a normal Visual Studio add-in project [Creating Visual Studio Add-Ins] and add the necessary code for adding a CommandBar. The only thing you need to figure out is the CommandBar you want to add your MenuItem to… for the activity designer this is “Activity Designer Context”, for the use case diagrams it is “UseCaseModel Context”, and so on… you can easily get all the names by iterating over the CommandBars collection, for example like the sketch below.
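
    A minimal sketch of that enumeration, not taken from the post; it assumes the DTE2 object the add-in wizard hands you is passed in as a parameter.

    using System.Diagnostics;
    using EnvDTE80;
    using Microsoft.VisualStudio.CommandBars;

    // Sketch: call this from the add-in's OnConnection, passing the DTE2 the wizard gives you.
    public static class CommandBarLister
    {
        public static void ListNames(DTE2 application)
        {
            // DTE2.CommandBars is typed as object, so cast it to the CommandBars collection.
            CommandBars commandBars = (CommandBars)application.CommandBars;

            // Dump every command bar name; look for "Activity Designer Context",
            // "UseCaseModel Context", and so on in the output.
            foreach (CommandBar bar in commandBars)
            {
                Debug.WriteLine(bar.Name);
            }
        }
    }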

    [Code screenshot: creating the CommandBar and MenuItem]

    Next, create the MenuItem handler, grab the current file and load the model [see the next code snippet]. From here we can do anything we want with the UML diagram. For example, for the test case generation I only iterate through the diagram [see: foreach (ModelElement … in allElements)], do some magic and create the work items. But you can also add ModelElements to the diagram, remove them or change properties.

    [Code screenshot: the MenuItem handler]

    Getting the model diagram is also pretty straightforward…

    [Code screenshot: loading the model diagram]

    Conclusion: adding MenuItems is an easy way to add functionality to your diagrams... although I have to say that this implementation is based on the Rosario CTP12 bits, and I don't expect these to stay the same while the diagrams evolve. Anyway, for now, a nice way to play with the UML diagrams.

    Posted: Jul 19 2008, 15:06 by Clemens

    Rosario – Project Estimating with Team Architect Diagrams

    An idea [and early implementation] of our “Enable ALM by Automation” vision within Rosario.

    [Diagram: Enable ALM by Automation]

    While VSTS with TFS is great at measuring, time tracking, project planning and other project-management kinds of tasks, it misses the early phase where the project team needs to estimate the project. With Rosario Team Architect this important missing piece in Application Lifecycle Management can be realized, by making a “connected” viewpoint for business estimation and measurement.

    Project Estimation.
    I think I don’t have to talk about why there is a need for project estimation [it would be a very long post]; how it’s done and what needs to be in place to do it “right” [is an estimation ever right??] is more interesting.
    The classic estimation book 'Software Estimation: Demystifying the Black Art', a must-read for anybody interested in software estimation, is a good start in capturing the needs for a good-enough software estimation implementation. The following “deadly sins” are distilled from the presentation “10 Deadly Sins of Software Estimation” by its author, Steve McConnell.

    • Confusing targets with estimates
    • Saying “yes” when you really mean “no”
    • Committing to estimates too early in the cone of uncertainty
    • Assuming underestimation has a neutral impact on project results
    • Estimating in the “impossible zone”
    • Overestimating savings from new tools or methods
    • Using only one estimation technique
    • Not using estimation software
    • Not including risk impacts in estimates
    • Providing off-the-cuff estimates

    Before we can use this list and look at Rosario Team Architect and what the diagrams can mean for software estimation, we have to dive into the different estimation methodologies. Lucky for me, somebody already did that in the paper “An Effort Estimation by UML Points in the Early Stage of Software Development”, and the writers also point out the pain points of these methodologies:

    Common problems with these approaches are lack of early estimation, over-dependence on expert decision, and subjective measurement of each metric. A new approach is required to overcome these existing difficulties. We move upstream in the software development process to requirement analysis and design.

    [Table: pros and cons of software estimation practices]

    The estimation style missing from this table is the most interesting one for us: Use Case Points [it’s the topic of the report, so that’s the reason it’s missing]. Use Case Points is an estimation method based on UML use cases. From UML Distilled:

    Use Cases are a technique for capturing the functional requirements of a system.

    And requirements are exactly what we need as the basis for an estimation.

    [Diagram: use case diagram]

    Use Case Points [UCP].
    So, Use Case Points is based on use cases, with all their pros and cons. But the way it works is pretty easy: when you have your use cases in place, you can start counting “points” the same way as with, for example, Function Points. Some cases are harder to implement than others, so those will get more points… easy going.

    To be more precise, Use Case Points consists of:

    • the number and complexity of the use cases in the system
    • the number and complexity of the actors on the system
    • various non-functional requirements (such as portability, performance, maintainability) that are not written as use cases
    • the environment in which the project will be developed

    and the ranking [a minimal point-counting sketch follows the list]:

    • Rank Actors as simple (1 point), average (2 points), or complex (3 points):
      • Simple: a machine with a programmable API
      • Average: either a human with a command line interface or a machine via some protocol (no API written)
      • Complex: a human with a GUI
    • Rank use cases as simple (5 points), average (10 points), or complex (15 points):
      • Simple: fewer than 4 key scenarios or execution paths in the UC
      • Average: 4 or more key scenarios, but fewer than 8
      • Complex: 8 or more key scenarios
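
    To make the counting concrete, here is a minimal sketch. It is my own illustration and only uses the weights from the ranking above; the actor and use case inventory is made up, and the technical and environmental adjustment factors of full UCP are left out.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class UseCasePointsSketch
    {
        // Weights taken from the ranking above
        static readonly Dictionary<string, int> ActorWeights =
            new Dictionary<string, int> { { "Simple", 1 }, { "Average", 2 }, { "Complex", 3 } };

        static readonly Dictionary<string, int> UseCaseWeights =
            new Dictionary<string, int> { { "Simple", 5 }, { "Average", 10 }, { "Complex", 15 } };

        static void Main()
        {
            // Hypothetical inventory, e.g. collected from the use case diagram
            string[] actors = { "Complex", "Simple" };                        // a GUI user and a machine with an API
            string[] useCases = { "Simple", "Average", "Average", "Complex" };

            int unadjustedActorWeight = actors.Sum(a => ActorWeights[a]);     // UAW
            int unadjustedUseCaseWeight = useCases.Sum(u => UseCaseWeights[u]); // UUCW

            // Unadjusted Use Case Points: UUCP = UAW + UUCW
            int uucp = unadjustedActorWeight + unadjustedUseCaseWeight;
            Console.WriteLine("UAW = {0}, UUCW = {1}, UUCP = {2}", unadjustedActorWeight, unadjustedUseCaseWeight, uucp);
            // Full UCP would multiply UUCP by the technical (TCF) and environmental (EF) factors.
        }
    }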

    Readings about Use-Case Points:

    Automation.
    From a Rosario Team Architect point of view this is an interesting estimation method… because it can be automated! Not that we should use UCP only because it can be automated, but automation of the estimation process will give us a big benefit in making estimation more mature within the organization.

    Looking at Steve McConnell’s deadly sins, we can imagine that one of the important capabilities we need from estimation tooling is historical data [“Providing off-the-cuff estimates”]. Without historical data, estimation is useless, error-prone and unpredictable; with automated estimations this can be realized. Another important pro of automation is reproducibility: running the estimation again will result in the same numbers, which gives the historical data some more value ;-)

    TFS is a database system, so capturing historical data shouldn’t be a big problem.

    [Image: Process Improvement Journey (from Level 1 to Level 5), The Boeing Company]

    Deadly sins tackled:

    • Saying “yes” when you really mean “no”
    • Not using estimation software
    • Providing off-the-cuff estimates

    One other important advantage you get from automating the estimation process with Rosario and UCP is collaboration: collaboration in the early phase of the project lifecycle between business and development. Estimation is important for the business to get budget and for the project lead for the planning; while working together on the use cases, the business can immediately see what the impact is in terms of budget, so there will be fewer unnecessary and incomplete requirements.

    Drawbacks and points to look at:

    1. Use Case Granularity and Complexity…
    There is no standard for writing use cases. You can define very high-level cases and very low-level detailed use cases; nobody will stop you from doing that. This is one major challenge in estimating with use cases. When writing too-high-level cases you are in the neighbourhood of the deadly sins “Estimating in the impossible zone” and “Committing to estimates too early in the cone of uncertainty”.

    [Image: The Cone of Uncertainty, from http://www.construx.com/Page.aspx?hid=1648]

    For sure it’s possible to estimate with use cases in an earlier stage of the project using higher-level use cases, the “initial state”, but keep in mind that the uncertainty will be bigger in this stage. When the project evolves and more information becomes available, more detailed use cases are made, and you can tune the estimation; with version control of previous estimations and use cases you will have a mechanism to assess your previous estimation. With this assessment you can make this process of early estimation in the lifecycle more mature. Actually, this learning process must be in place during the whole lifecycle, for estimation and for everything else [see the Boeing story].

    This problem of differences in granularity and complexity of use cases is recognized by the industry, and many people and organizations have a solution for it. For example, Capgemini uses Smart Use Cases, which are generic use case types.

    [Image: Smart Use Case types]

    I really like this approach of a kind of repository of use case stereotypes, although you must be aware that a use case is technology-independent; adding technology into use cases will make the world fuzzier.

    2. Technology/platform independence…
    A use case is a document which describes “WHAT” our system will do, at a high level and from the user's perspective. A use case does not capture “HOW” the system will do it. It’s impossible to make a “platform-independent” estimation: platforms, technologies, tools and languages all have an impact on the speed of development. While use cases, actually UML as a whole, are platform-independent, estimation can’t be, so there needs to be a place in the Use Case Points methodology for differences in platforms, technologies, tools and languages. With UCP this is minimally done by the identification of actors.

    Actor identification need technical details: In order that the actor is classified we need to know technical details like which protocol the actor will use. So estimation can be done by technical guys.
    [How to Prepare Software Quotation]

    [Image: almost 4th of July :-) The signing of the Declaration of Independence, 4 July 1776]

    Actually, you don’t want technology in use cases; they are meant to be independent, so we need to keep them that way. Another way to put technological knowledge into the estimation is to add this information to the complete estimation. For example, most organizations have already chosen their platforms, technologies, tools and languages, and Enterprise Architecture will monitor that every project uses their guidelines. So adding a reference to these guidelines while estimating will bring technology into the estimation. Historical data will need to have this information, and projects can base their estimation on this guideline-referenced data. These Enterprise Architecture guidelines can be measured up-front and will get fine-tuned with every project, with that identifying the risks of using new technologies at an early stage.

    When capturing this with automation, you have these deadly sins tackled:

    • Committing to estimates too early in the cone of uncertainty
    • Estimating in the “impossible zone”
    • Overestimating savings from new tools or methods
    • Not including risk impacts in estimates

    Deadly sins not tackled:

    • Confusing targets with estimates [for sure, don't put the estimations in TFS as work items; maybe something like a candidate work item the project lead can upgrade to a work item, but that's already very, very tricky]
    • Assuming underestimation has a neutral impact on project results [don't assume that]
    • Using only one estimation technique [with one technique automated there is time left for the others…]

    Summary.

    So, there have to be many capabilities in place before we can make a mature automated estimation process with Rosario Team Architect.
    But we can start small… extend the use case diagram with additional “complexity” information [also granularity information], add a command which captures the use cases, and collect the points. The very near next step would be historical data with the possibility of referencing guidelines. In the future we could make a repository of use case stereotypes, like the Smart Use Cases from Capgemini.

    [next post an early implementation]

    Posted: Jul 03 2008, 16:30 by Clemens