
Custom MainLoop in C# / UWP applications with DirectX (SharpDX) with unlocked frame rates (beyond 60 Hz)

When you want to write a game or a 3D application in C# using SharpDX, and you want it to be UWP, there is little information about how to correctly handle the basics of the main loop.

All the examples I found were based on the interop between XAML and DirectX, but if you go that path you'll need to rely on the CompositionTarget.Rendering event to handle your updates and renders, which leaves all the inner processing to Windows.UI.Xaml and caps the frame rate at 60 Hz.

So, how do you create your own, unlocked-framerate-ready main loop?

First things first:

Dude, where's my Main entry point?

Typically, a C# application has a Program static class where the "main" entry point is defined.

When you create a WPF, UWP or Windows Store application though, apparently that code is missing, and the application is started magically by the system.

The truth is that the main entry point is hidden in automatically-generated source code files under the obj\platform\debug folder. If you look there for a file called App.g.i.cs, you'll find something like this:


#if !DISABLE_XAML_GENERATED_MAIN
    /// <summary>
    /// Program class
    /// </summary>
    public static class Program
    {
        [global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Windows.UI.Xaml.Build.Tasks"," 14.0.0.0")]
        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        static void Main(string[] args)
        {
            global::Windows.UI.Xaml.Application.Start((p) => new App());
        }
    }
#endif

Et voilà... there's your entry point.

So the first thing is to define the DISABLE_XAML_GENERATED_MAIN conditional compilation symbol, to prevent Visual Studio from generating the entry point for you.

Next, add your own Program class and Main entry point, as usual, so you have control over the application start procedure. You can simply copy-paste that code anywhere in your project.
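You can set the symbol under Project Properties > Build > "Conditional compilation symbols", or directly in the project file. A minimal sketch of the .csproj fragment (standard MSBuild; the exact placement among your existing PropertyGroups may vary):


<PropertyGroup>
  <DefineConstants>$(DefineConstants);DISABLE_XAML_GENERATED_MAIN</DefineConstants>
</PropertyGroup>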

The Main Loop

Please note: this implementation is inspired by the C++ equivalent described here.

Now that you have a main entry point, you can replace the invocation of Windows.UI.Xaml.Application.Start (which internally deals with its own main loop) with your own code.

What we want to use instead is Windows.ApplicationModel.Core.CoreApplication.Run, which is not bound to XAML and allows you to define your own IFrameworkView class, with your custom Run method.

Something like:


public static class Program
{
    static void Main(string[] args)
    {
        MyApp sample = new MyApp();
        var viewProvider = new ViewProvider(sample);
        Windows.ApplicationModel.Core.CoreApplication.Run(viewProvider);            
    }
}

The ViewProvider class

The main purpose of the view provider is to create an IFrameworkView when required, so it could be something like this:


public class ViewProvider : IFrameworkViewSource
{
    MyApp mSample;

    public ViewProvider(MyApp pSample)
    {
        mSample = pSample;
    }

    /// <summary>
    /// A method that returns a view provider object.
    /// </summary>
    /// <returns>An object that implements a view provider.</returns>
    public IFrameworkView CreateView()
    {
        return new View(mSample);
    }
}

The View Class

The view class implements the IFrameworkView interface, which defines most of the logic we want to implement.

The framework will invoke the interface's methods to perform initialization, uninitialization, resource loading, etc., and will also notify us when the application window changes, through the SetWindow method.

For the purpose of this article, the most interesting part is the Run method, which we can write to include our own, shiny, new Main Loop:



public void Run()
{
    var applicationView = ApplicationView.GetForCurrentView();
    applicationView.Title = mApp.Title;

    mApp.Initialize();

    while (!mWindowClosed)
    {
        CoreWindow.GetForCurrentThread().Dispatcher.ProcessEvents(
                                   CoreProcessEventsOption.ProcessAllIfPresent);
        mApp.OnFrameMove();
        mApp.OnRender();
    }
    mApp.Dispose();
}
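
For completeness, here is a minimal sketch of what the rest of the View class could look like (this part is an assumption on my side: mApp and mWindowClosed are the fields used by Run above, and subscribing to the CoreWindow.Closed event is what ends the loop):


public class View : IFrameworkView
{
    readonly MyApp mApp;
    bool mWindowClosed;

    public View(MyApp pApp)
    {
        mApp = pApp;
    }

    public void Initialize(CoreApplicationView applicationView) { }

    public void SetWindow(CoreWindow window)
    {
        // Let the main loop exit when the window is closed
        window.Closed += (sender, args) => mWindowClosed = true;
    }

    public void Load(string entryPoint) { }

    public void Run()
    {
        // ... the main loop shown above ...
    }

    public void Uninitialize() { }
}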

This removes the need to rely on XAML stuff in your game (and its overhead), gives you more control over how the main loop behaves, and unlocks the possibility of rendering with a variable frame rate in C#.

Please refer to this page for further info about how to initialize a swap chain suitable for unlocked frame rates.
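
The key point, as a one-line sketch (mSwapChain here is an assumed SharpDX.DXGI swap chain; a sync interval of 0 presents without waiting for the vertical blank):


// Present immediately, without waiting for v-sync
mSwapChain.Present(0, SharpDX.DXGI.PresentFlags.None);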

Hope it helps!

Evolution of visuals in racing games 1992-2014

22 years. That's the time it takes to move from barely recognizable locations to blazing reality.

I still remember when I first saw Geoff Crammond and MicroProse's Formula One Grand Prix. It was mind-blowing!! 3D graphics, amazing gameplay... the best racing experience I'd had so far...

One of my best friends had a 386 PC that met the ultra-high-end requirements the game demanded (DOS, 1 MB of RAM and a VGA graphics card). Every time I had the chance to drop by his house, the first thing we did was play this game.


But things have changed quite a lot since then. Comparing the Monaco environment from that game with the one from Codemasters' F1 2013 makes a hell of a difference...

[Best viewed in FullHD 1080p]

And things keep moving forward. This year we will get a new F1 2014 game, along with some others that keep pushing the limits of visual realism in videogames: [All videos best viewed in FullHD 1080p]

Driveclub:


Project Cars Trailer:


Project Cars vs Real Life:


Awesome!!!

How to sample the color of anything displayed on your screen, with Photoshop

This is a Photoshop feature I didn't know about. If you have ever wondered exactly which color an icon, an application button, or anything else displayed on your PC uses, you can find out with Photoshop by following these simple steps:

1.- Open any image in Photoshop

2.- Select the Eyedropper tool, used to sample colors (normally from within the image you are working on).

3.- Click on any point of your image and, WITHOUT RELEASING THE BUTTON, drag the mouse pointer to any other point of your desktop, to any other application, icon or window.

Photoshop will continuously show the color under the mouse cursor, no matter whether you are still over your image or even inside Photoshop at all. When you release the button, that color is selected as the working color in Photoshop, so you can use it in your images or read its RGB components.

Very useful!!!



Fast casting of C# Structs with no unsafe code (but still kind of "unsafe")

C++ allows us to perform any casting between memory pointers. It's basically up to you to ensure the correct types are cast, to prevent memory problems.

C#, however, doesn't allow this out of the box, unless you resort to unsafe code and perform the pointer conversion yourself, pretty much like in C++.

The problem is that unsafe code is not supported on all platforms, and it's generally a good idea to avoid it as long as you can.

So, imagine we have two structs like this:


    public struct STA
    {
        public int CustomerID;
        public float CustomerRate;
    }
    public struct STB
    {
        public int CustomerID;
        public float CustomerRate;
    }

One of them is yours, and the other one comes from an API or legacy software you don't have access to. Now, imagine you need to convert one into the other. How would you approach that?

Obviously, if you try to simply assign them, it just won't work:
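
For example:

            struct_a = struct_b;    // Compile error CS0029:
                                    // cannot implicitly convert type 'STB' to 'STA'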



Of course, the most evident (and safest) solution is to create a new struct of type STA and copy the contents of STB into it:

struct_a = new STA { CustomerID = struct_b.CustomerID, CustomerRate = struct_b.CustomerRate };

The drawback is that this approach is slow and implies a memory overhead, which might not be acceptable sometimes.

If performance is a critical issue, and you are sure that both structs are 100% compatible, share the exact same memory layout, and come from compatible platforms... why not fool the compiler into assuming they are compatible types?

As we mentioned, in unsafe C# code this can be achieved simply by casting pointers, just like in C++. But if you mark your C# code as unsafe, it can be rejected on some platforms. Is there a way to do this without unsafe code? Yes, there is.

C# StructLayout to the rescue

Perfectly safe C# code allows you to explicitly define the offset of struct members, using attributes from System.Runtime.InteropServices, just like this:


    [StructLayout(LayoutKind.Explicit)]
    public struct STA
    {
        [FieldOffset(0)]
        public int CustomerID;
        [FieldOffset(4)]
        public float CustomerRate;
    }

This allows you to do tricky things like setting two different members of the struct at the same offset, creating something similar to a C++ union:


    [StructLayout(LayoutKind.Explicit)]
    public struct Union
    {
        [FieldOffset(0)]
        public STA StructA;
        [FieldOffset(0)]
        public STB StructB;
    }

Note that both StructA and StructB are at the same field offset, and will therefore occupy the exact same location in memory. As both share the same memory layout, the result is that you have ONE single object in memory, and two different references (kind of like pointers) to it, each one using a different type.

Now, we can do the following:


            STA struct_a;
            STB struct_b;
            ...
            Union stu = new Union();
            stu.StructB = struct_b;
            struct_a = stu.StructA;

As you can see, no new STA has been created in memory, and we have skipped the whole process of copying data from one struct to the other.

However, please be aware that this is kind of cheating... You are fooling the compiler into accepting it, but in practice you are performing a classic pointer conversion, even though you are using purely safe code.

PLEASE BE AWARE that this approach doesn't take endianness into account. Different platforms, with different byte endianness, may store bytes in opposite orders. For example, if STA comes from a big-endian platform and STB is used on a little-endian platform (or the other way around), bytes will be reversed by this operation. It doesn't take differences in data types into account either, so you must be very careful to ensure that every field has the same size in both structs.

So, remember:

if (same endianness && same data types)
    you are good to go!
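
As a cheap sanity check for the size part, you could assert it at runtime (a sketch using Marshal.SizeOf, from the System.Runtime.InteropServices namespace you already imported for the attributes):

System.Diagnostics.Debug.Assert(
    Marshal.SizeOf(typeof(STA)) == Marshal.SizeOf(typeof(STB)),
    "STA and STB must share the exact same memory layout");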

Functional improvements

The Union struct we have created can be made much more comfortable to use if you add operators to it.

For example, comparison operators like this:

public static bool operator ==(STA left, Union right)
{
    return left.Equals(right.StructA);
}
public static bool operator !=(STA left, Union right)
{
    return !left.Equals(right.StructA);
}
public static bool operator ==(STB left, Union right)
{
    return left.Equals(right.StructB);
}
public static bool operator !=(STB left, Union right)
{
    return !left.Equals(right.StructB);
}

Will allow you to compare the original types directly with Unions (note that C# requires declaring the matching != operator for every ==, and that ValueType.Equals compares structs field by field):

if (struct_a == union)

And even more comfortable, adding implicit operators like this:

        public static implicit operator Union(STA value)
        {
            Union ret = new Union();
            ret.StructA = value;
            return ret;
        }

Will allow you to simply assign one type to the other like this:

            STA struct_a;
            ...
            Union union = struct_a;

Memory footprint improvements

One small drawback of this approach is the need to create a Union struct each time you want to perform a conversion of this kind. A simple solution is to perform the operation on a static Union object. It's a bit messy, but it works. For instance, if you declare the struct like this:

    [StructLayout(LayoutKind.Explicit)]
    public struct Union
    {
        [FieldOffset(0)]
        public STA StructA;
        [FieldOffset(0)]
        public STB StructB;

        public static Union StaticRef = new Union();

        public static STA ToSTA(STB pStructB)
        {
            StaticRef.StructB = pStructB;
            return StaticRef.StructA;
        }
        public static STB ToSTB(STA pStructA)
        {
            StaticRef.StructA = pStructA;
            return StaticRef.StructB;
        }
    }

You can now re-use the same static object over and over again (note that this static instance is not thread-safe, so synchronize access to it if several threads perform conversions concurrently), doing things like:

            STA struct_a;
            STB struct_b;
            ...
            struct_a = Union.ToSTA(struct_b);

Hope it helps!! Cheers...

DirectX Control Panel and D3D Debug Output in D3D 9.x/10.x/11.x for Windows 7, 8 and 8.1

Debugging D3D applications can be a pain, but it's sometimes absolutely necessary if you want to know what's going on in your D3D application (error codes don't give much information without the debug output).
However, things have changed quite a bit in the latest versions of Windows (8.1), Visual Studio (2013) and DirectX (11.2). The following video explains some of the changes related to D3D debugging, the DirectX Control Panel, and how all the new infrastructure works:

You can also access the content in the form of slides.
Keep in mind that some of the DirectX features are no longer distributed with the DirectX SDK, but with the Windows SDK. So we will try to cover all the possible cases you could face when trying to activate the debug output in D3D, whether you work on Windows 7 with the old DirectX SDK (June 2010), on Windows 7 or Windows 8 with the new Windows SDK, or on the latest Windows 8.1 with its own Windows SDK.

The New DirectX Control Panel

We will need to deal with it to enable D3D debugging and to manage other stuff, so the first thing is to learn to differentiate between the old one (June 2010 DirectX SDK) and the new ones (Windows SDK). It's easy: the new ones only include one tab (Direct3D 10.x/11.x):
Old Control Panel (DirectX SDK June 2010)
Location: C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Utilities\bin\x64 (or x86)

New DX Control Panel (Windows SDK)
Location: C:\Windows\System32

So, if you are developing for D3D 10.x or 11.x, use the new one, as the old one won't have any effect. If you are still using D3D9 and the old DX SDK 2010, grab the old one.
Note: See the above video to learn about new features in the panel like the “Feature level limit”.

Windows 7

D3D 9.x

If you are still developing with D3D9, you should honestly consider moving forward. But if you can't, and you need to enable debugging in your app, you just need to use the OLD Control Panel described above: navigate to the Direct3D 9 tab, make sure you select “Use Debug Version of Direct3D 9”, and turn the Debug Output Level to “More”, as depicted in the following image:
That should force your DirectX applications to use the Debug version of the DirectX libraries, so you should immediately start to see debug output in Visual Studio.

Managed D3D9 applications (SlimDX, SharpDX and similar wrappers)

If you are developing in C#, keep in mind that you will also need to activate the “Enable native code debugging” flag under the Debug tab of your main project properties in Visual Studio. Otherwise, the native debug output cannot get through to the output window.

D3D 10.x / 11.x

Important Note: The necessary components for debugging D3D 10.x and 11.x are no longer installed with the old DirectX SDK (June 2010). In order to have them, you need to install the Windows 8 SDK (even if you are on Win7). If you don't have the necessary components, the creation of the device with the "debug" flag will fail (see below for more info). One easy way to check whether you have the components is to look for the NEW DX Control Panel in C:\Windows\System32.

Activating the debug output in D3D 10.x / 11.x is a bit different, as settings are handled per application (you need to add your exe to a list in the control panel, and set a specific configuration for it there). To do so, please follow these steps:
  1.- Open the NEW DirectX Control Panel and navigate to the Direct3D 10.x / 11 tab
  2.- Click on “Edit List” to add your exe to the list of applications controlled by the DX panel
  3.- In the window that pops up, click on the dots “…” and navigate to your exe file. Then click “Ok”.
  4.- Back in the main tab, choose the configuration you want (you probably want “Force On”, to force debug output), and mute all the message types you don't want to see (if any)
Once your exe is on the list of apps the Control Panel manages, the next step is to make sure your D3D device connects to the Debug Layer of DirectX.
You can find more info here, but basically what you need to do is create your device with creation flags including the D3D11_CREATE_DEVICE_DEBUG flag.
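
For example, in C# with SharpDX, a minimal sketch could look like this (DeviceCreationFlags.Debug is SharpDX's wrapper for D3D11_CREATE_DEVICE_DEBUG):

using SharpDX.Direct3D;
using SharpDX.Direct3D11;

// Create the D3D11 device with the debug layer enabled
var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);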

Managed D3D 10.x /11.x applications (SlimDX, SharpDX and similar wrappers)

Just like with D3D 9, when developing in C# you should remember to activate the “Enable native code debugging” flag under the Debug tab of your main project properties in Visual Studio. Otherwise, the native debug output cannot get through to the output window (see above in this post for more info).

Windows 8.x + Windows SDK

This part covers the case when working in Windows 8.x with the newer versions of the Windows SDK.

D3D 9.x

Debugging D3D 9 applications on Windows 8 should work exactly the same as it did on Windows 7. Of course, the new Windows SDK doesn't include tools to configure D3D9, so you should install the June 2010 DX SDK to get access to the OLD control panel. I couldn't verify that this works, as all my machines are updated to Windows 8.1, so any feedback here will be really welcome.
What I can tell you is that, unfortunately, D3D9 debugging seems to be disabled in Windows 8.1. If you open the OLD DX Control Panel, you will see that all the debug parts of the Direct3D 9 tab are grayed out. I tried by all means to bring it back, with no luck, so if you manage to enable it, please let me know.

D3D 10.x / 11.x

Enabling the debug output for D3D 10.x and 11.x is pretty much the same as in Windows 7, except that this time you will need to use the NEW version of the DX Control Panel, located in C:\Windows\System32 instead of the usual DXSDK folders.
Also, remember to create your devices specifying the D3D11_CREATE_DEVICE_DEBUG creation flag (as described above), and if you are developing in C#, remember to activate the “Enable native code debugging” option in your main project.

Troubleshooting

  • The application works but I get no debug output: If you are in D3D9, make sure you activated the debug libraries in the old DX Control Panel. Also, if you work in C#, make sure to activate the “Enable native code debugging” option. If you work in D3D 10/11, make sure you created the device with the D3D11_CREATE_DEVICE_DEBUG flag, and don't forget to add your app to the list of programs managed by the DX Control Panel. In all cases, always use the appropriate DX Control Panel (see above to learn about this).
  • In D3D 10.x / 11.x, the application fails while trying to create the device with the DEBUG creation flag: This usually happens if you don't have the correct SDK installed. If you are on Windows 7 or Windows 8, make sure you install the Windows 8 SDK. If you are on the latest Windows 8.1, you should install its own Windows 8.1 SDK, as it's not compatible with the 8.0 SDK version. One easy way to check whether you have the components is to look for the NEW DX Control Panel in C:\Windows\System32.

Too many special effects?

Yesterday I watched Iron Man 3. Meh... apart from Robert Downey Jr.'s jokes (which made me laugh a lot), the movie itself is rather weak... Maybe I'm just getting old, but honestly, I have grown tired of the trend we have been enduring for years now: more special effects, more spectacle, more action, more of everything...

Special effects have gone through several evolutions, or several generations: at first they barely added a few little things to footage shot with a real camera, but as technology advanced, the on-screen prominence of FX grew remarkably (we all remember the re-edition George Lucas made of the first three Star Wars movies, to add MORE special effects to them).

An example: the special effects of Highlander, from 1986:

 

Over time, we have seen computer-generated content (CG, or CGI) replace more and more things on screen, even shooting with the actors in front of green or blue screens (with the whole environment replaced by computer), and even some (not very successful) attempts to introduce CG actors in close-up shots (I'm thinking of the T-800 with Arnold Schwarzenegger's likeness in Terminator Salvation, introduced with CG because Arnie had made the jump into politics):

 

Each of these "evolutions" brings an inevitable obsession with using the new technology ad nauseam. It's a kind of "anything you can do, I can do bigger" competition, which usually makes the story and the script lose prominence, and therefore the movies shot during those periods leave a lot to be desired. The new Batman movies, Terminator Salvation itself, and a few other examples opened a window for hope… They are examples to follow, where the special effects are at the service of the story, and not the other way around.

But now we get to live through a new generation or evolution of FX: whole minutes of footage that are entirely CG (computer generated), without a single real element in them. Sets, environments, actors: everything is CG.

And this is a remarkable change… Doing away with real actors, sets and cameras grants unprecedented freedom (not to mention the cost reduction), allowing impossible camera shots and scenes totally unfeasible with traditional techniques. Spiderman was one of the first movies to start using these techniques with some success, without the average viewer noticing (incredible camera travellings around New York, with the protagonist swinging from the buildings). The Avengers already included quite a few shots where the actors are CG (as long as they are far away, or moving fast enough for the motion blur to cover it up).

So, as always, before this new obsession cools down and these techniques are put at the service of the movie, the script and the story (as it should be), I'm afraid we will have to live through yet another escalation of "let's see whose is bigger".

You only need to watch the Pacific Rim trailer...

Realtime, screen-space local reflections, using C# and SharpDX

The following video shows my own implementation of the "Real Time Local Reflections (RLR)" technique used by Crytek in CryEngine 3, and described here.

This particular implementation works with a non-deferred rendering system, and it's adapted to work particularly well with planar surfaces like roads (which is what we use it for the most, here at Simax).

The process is basically doing a texture lookup for the reflections as usual, but instead of using a cubemap, we use a simple texture (a copy of the previous back-buffer). It also needs a copy of the previous frame's depth buffer, to raymarch looking for the appropriate sample. The steps are the following (a sketch of this loop follows the list):

  1.- Start from the screen position of the pixel you are shading
  2.- Move along the direction of the reflected normal (projected to screen space)
  3.- At each step, take a sample of the depth buffer and look for a hit. If found, use the sample of the back-buffer at the same offset. If not, move one step forward until you are out of the texture bounds
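
In the real implementation this loop lives in the pixel shader (HLSL), but the marching logic itself is simple. Here is a CPU-side C# sketch of it, for illustration only: sampleDepth and sampleColor are hypothetical stand-ins for the depth-buffer and back-buffer lookups, coordinates live in [0..1] texture space, and depth is carried in Z (uses System and System.Numerics):

static Vector4? RayMarchReflection(Vector3 startPos, Vector3 reflDir,
                                   Func<float, float, float> sampleDepth,
                                   Func<float, float, Vector4> sampleColor,
                                   int maxSteps = 64)
{
    Vector3 step = reflDir * (1.0f / maxSteps);
    Vector3 pos = startPos;

    for (int i = 0; i < maxSteps; i++)
    {
        pos += step;

        // Out of the texture bounds: no hit (this is where you fade the effect out)
        if (pos.X < 0f || pos.X > 1f || pos.Y < 0f || pos.Y > 1f)
            return null;

        // Hit: the ray went behind the surface stored in the depth buffer,
        // so the back-buffer sample at this offset is our reflection color
        if (sampleDepth(pos.X, pos.Y) <= pos.Z)
            return sampleColor(pos.X, pos.Y);
    }
    return null;
}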

Cons

It has a lot of downsides, as the amount of information present in a single texture is very limited. One key aspect is to fade out when you are reaching the limits of the back-buffer, and when the reflection vector is facing the viewer (and therefore doesn't hit the back-buffer). That way, you avoid hard edges in the reflection.

Another limitation is its compatibility with multisampling. The problem is that you need a copy of the depth buffer, and if it's multisampled, you need to resolve it to a single-sampled resource. Resolving a multisampled depth buffer is not a trivial task, and on DX10-only graphics cards it seems to be impossible (apart from doing it manually).

The ResolveSubresource method does a good job with back-buffers, but it doesn't work with depth buffers (I haven't tried in DX11 yet). Another option is to move to DX 10.1 and pass the depth buffer to the shader as a multisampled resource, using the Texture2DMS type introduced in DX 10.1. It allows passing multisampled resources to shaders, so the resolve can be done in the shader itself.

Pros

The major advantage of this method is speed. By grabbing only the previous back-buffer, you can add reflections to almost any object in your scene. Of course, the shader used to draw is slower than a simple one, but that's nothing compared with the cost of rendering multiple cube-maps or other methods...

Also, despite its cons, it does a pretty convincing job in certain cases. Wet roads, water shaders and similar effects are a perfect fit, because when you are driving in a simulator, the angle of incidence on the road, and therefore the reflection vector, fits well with the back-buffer projection.

Another implementation of the technique can be found here. I haven't tried it, but it seems to work too…

Cheers!

Projecting a 3D Vector to 2D screen space, with automatic viewport clipping (DirectX, SlimDX or XNA)

Many times, you will need to know the 2D screen coordinates of a 3D world position. DirectX already includes methods to perform vector projections, taking into account the required World, View and Projection matrices, as well as the viewport scaling. Those methods do not, however, include viewport clipping as an additional feature.

Viewport clipping can be a tricky matter, and sometimes you will need to rely on algorithms like the Sutherland-Hodgman polygon-clipping algorithm, or the classic line-clipping algorithm for 2D viewports: the Cohen-Sutherland algorithm. Those methods are especially appropriate when you are already dealing with 2D coordinates, or when you need to know the extra points or polygons generated when clipping is performed.

In our case, however, we will only focus on finding the closest in-screen coordinates that correspond to an off-screen point, without dealing with any extra geometry or polygon subdivision. It's also important to note that we will be working with 3D coordinates that go through a projection process (finally yielding 2D coordinates). This is relevant, as it provides us with additional information we can use, and allows us to jump into the algorithm and perform the clipping in the middle of the projection pipeline, instead of doing so at the end, when the coordinates are already 2D.

Resources like this, and this explain very well the processing of vertices in the Direct3D pipeline:


As you can see, each 3D position travels through different stages and coordinate spaces: model space -> world space -> camera space -> projection space -> clipping space -> homogeneous space -> and finally, screen space.

Evidently, D3D also performs certain types of clipping on vectors, and you can tell from the above picture that clipping is done, (surprisingly), in clip space. We will try to mimic that behavior…

Note: Transforming coordinates with the MClip matrix, to go from projection space to clip space, should be done only if you want to scale or shift your clipping volume. If you are OK with a clipping volume that matches your screen render-target viewport (you will be, in most cases), you should leave this matrix as the identity, or simply skip this step. The algorithm written below has this whole step commented out.

Once our coordinates are in clip space (Xp, Yp, Zp, Wp), we can easily perform the clipping by limiting their values to the range -Wp..Wp for X and Y, and to the range 0..Wp for Z.

After that, we just need to proceed with the normal vector projection algorithm, and the resulting 2D coordinates will be stuck inside the screen viewport. An extra feature that is nice to have is a simple output variable that tells us whether the coordinates were inside or outside the viewport.

A C# implementation of such an algorithm could be:

public static Vector2 ProjectAndClipToViewport(Vector3 pVector, float pX, float pY,
                                float pWidth, float pHeight, float pMinZ, float pMaxZ,
                                Matrix pWorldViewProjection, out bool pWasInsideScreen)
        {
            // First, multiply by worldViewProj, to get the coordinates in projection space
            Vector4 vProjected = Vector4.Zero;
            Vector4.Transform(ref pVector, ref pWorldViewProjection, out vProjected);
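            // (The Vector4.Transform overload above matches XNA; in SharpDX/SlimDX the
            // equivalent call is Vector3.Transform(ref pVector, ref pWorldViewProjection,
            // out vProjected))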

            // Second (OPTIONAL STEP): multiply by the clipMatrix, if you want to scale
            // or shift the clip volume. If not (most of the time you won't), just leave
            // this part commented out,

            // or set an Identity Matrix as the clip matrix. The default clip volume parameters
            // (see below), will produce an identity clip matrix.

            //float clipWidth = 2;
            //float clipHeight = 2;
            //float clipX = -1;
            //float clipY = 1;
            //float clipMinZ = 0;
            //float clipMaxZ = 1;
            //Matrix mclip = new Matrix();
            //mclip.M11 = 2f / clipWidth;
            //mclip.M12 = 0f;
            //mclip.M13 = 0f;
            //mclip.M14 = 0f;
            //mclip.M21 = 0f;
            //mclip.M22 = 2f / clipHeight;
            //mclip.M23 = 0f;
            //mclip.M24 = 0f;
            //mclip.M31 = 0f;
            //mclip.M32 = 0;
            //mclip.M33 = 1f / (clipMaxZ - clipMinZ);
            //mclip.M34 = 0f;
            //mclip.M41 = -1 -2 * (clipX / clipWidth);
            //mclip.M42 = 1 - 2 * (clipY / clipHeight);
            //mclip.M43 = -clipMinZ / (clipMaxZ - clipMinZ);
            //mclip.M44 = 1f;
            //vProjected = Vector4.Transform(vProjected, mclip);
            
            // Third: Once we have coordinates in clip space, perform the clipping,
            // to leave the coordinates inside the screen. The clip volume is defined by:

            //
            //  -Wp < Xp <= Wp
            //  -Wp < Yp <= Wp
            //  0 < Zp <= Wp
            //
            // If any clipping is needed, then the point was out of the screen.
            pWasInsideScreen = true;
            if (vProjected.X < -vProjected.W)
            {
                vProjected.X = -vProjected.W;
                pWasInsideScreen = false;
            }
            if (vProjected.X > vProjected.W)
            {
                vProjected.X = vProjected.W;
                pWasInsideScreen = false;
            }
            if (vProjected.Y < -vProjected.W)
            {
                vProjected.Y = -vProjected.W;
                pWasInsideScreen = false;
            }
            if (vProjected.Y > vProjected.W)
            {
                vProjected.Y = vProjected.W;
                pWasInsideScreen = false;
            }
            if (vProjected.Z < 0)
            {
                vProjected.Z = 0;
                pWasInsideScreen = false;
            }
            if (vProjected.Z > vProjected.W)
            {
                vProjected.Z = vProjected.W;
                pWasInsideScreen = false;
            }

            // Fourth step: Divide by w, to move from homogeneous coordinates to 3D
            // coordinates again

            vProjected.X = vProjected.X / vProjected.W;
            vProjected.Y = vProjected.Y / vProjected.W;
            vProjected.Z = vProjected.Z / vProjected.W;

            // Last step: Perform the viewport scaling, to get the appropriate coordinates
            // inside the viewport

            vProjected.X = ((float)(((vProjected.X + 1.0) * 0.5) * pWidth)) + pX;
            vProjected.Y = ((float)(((1.0 - vProjected.Y) * 0.5) * pHeight)) + pY;
            vProjected.Z = (vProjected.Z * (pMaxZ - pMinZ)) + pMinZ;

            // Return pixel coordinates as 2D (change this to 3D if you need Z)
            return new Vector2(vProjected.X, vProjected.Y);
        }
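
A usage sketch (names are illustrative; worldViewProj is assumed to be your combined World * View * Projection matrix):

            bool wasInside;
            Vector2 screenPos = ProjectAndClipToViewport(
                new Vector3(10f, 5f, -3f),   // 3D world-space point to project
                0f, 0f,                      // viewport top-left corner (pX, pY)
                1280f, 720f,                 // viewport size (pWidth, pHeight)
                0f, 1f,                      // viewport depth range (pMinZ, pMaxZ)
                worldViewProj,
                out wasInside);

            if (!wasInside)
            {
                // The point was off-screen: screenPos was clamped to the
                // closest viewport border
            }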

Hope it helps!
