<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[db-in]]></title><description><![CDATA[iOS Development, Swift and Objective-C]]></description><link>http://blog.db-in.com</link><generator>NodeJS RSS Module</generator><lastBuildDate>Thu, 17 Dec 2020 22:44:41 GMT</lastBuildDate><atom:link href="http://blog.db-in.com/rss/" rel="self" type="application/rss+xml"/><author><![CDATA[Diney Bomfim]]></author><ttl>60</ttl><item><title><![CDATA[The Chicken Problem (Protocol Oriented Programming)]]></title><description><![CDATA[<p>Hello folks,</p>

<p>In this article, let's talk about Protocol Extensions and Protocol Oriented Programming (POP). There is a lot of discussion on this subject, with many comparisons between Swift Protocol Extensions and Multiple Inheritance. So let's discuss the difference between the two and why POP sounds much more reliable.</p>

<p><img src='https://dl.dropboxusercontent.com/u/6730064/chicken-run.jpg' /></p>

<p><a name="list_contents"></a>  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#chicken">Chicken Problem</a></li>
    <li><a href="#oop">OOP & Multiple Inheritance</a></li>
    <li><a href="#extension">Swift - Protocol Extensions</a></li>
    <li><a href="#protocol">Protocol Oriented Programming (POP)</a></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/><a name="chicken"></a> <br />
<h2><strong>The Chicken Problem</strong></h2><a href="#list_contents">top</a></p>

<p>I like to use this example to illustrate how complex a small problem can become in large-scale development and a real-world application. Let's first see the problem and then discuss it.</p>

<p>Imagine an application in which we need to construct "objects/entities" for birds. At first glance, we'll have Parrots, Sparrows and Falcons. Let's now create a beautiful and smart code for that.</p>

<p><br/><a name="oop"></a> <br />
<h2><strong>OOP &amp; Multiple Inheritance</strong></h2><a href="#list_contents">top</a></p>

<p>All those birds share many things, like a "beak" and "feathers", and all of them can "fly". So let's think about the following architecture for our Birds App.</p>

<p><img src='https://dl.dropboxusercontent.com/u/6730064/chicken_problem1.jpg' /></p>

<p>Looks great! At this point, we as developers are super proud of our creation. It's clean and reliable; we wrote the code once, with only one implementation of the "fly" method, which is a very complex algorithm.</p>

<p>Now we ship <b>Release 1</b> of the product. At this point the client (the owner of the Bird Application) says: "Hey, you know what, I noticed that my customers would like to have a Chicken as well, so please add Chickens to this app. It should be very easy, since you told me you have a very flexible architecture working with OOP."</p>

<p>Ok, that's easy... Chickens are birds, so Chicken becomes a subclass of Bird... But wait... Chickens can't fly! So do I need to override the "fly()" method and just negate my beautiful algorithm from "Bird"?</p>

<p><img src='https://dl.dropboxusercontent.com/u/6730064/chicken_problem2.jpg' /></p>

<p>Wait, by doing this I'm not just rewriting code; if my beloved client decides tomorrow to invent a new kind of super Chicken that can fly, what am I going to do? A new subclass of "Chicken" negating my Chicken's negation of "fly()"?</p>
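<p>A minimal sketch of this trap in Swift (the classes below are illustrative, not taken from the diagram):</p>

```swift
// Hypothetical OOP hierarchy illustrating the problem.
class Bird {
    func fly() -> String {
        return "Executing the complex flying algorithm"
    }
}

// Chicken inherits fly() but must negate it.
class Chicken: Bird {
    override func fly() -> String {
        return "Chickens can't fly!" // cancels the inherited behavior
    }
}

// And the super Chicken negates the negation...
class SuperChicken: Chicken {
    override func fly() -> String {
        return "Executing the complex flying algorithm"
    }
}
```

<p>Every new exception demands another override, which is exactly the smell we're about to discuss.</p>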

<p>And what if my client decides to move the app to the South Pole and create Penguins? Penguins have no feathers!</p>

<p><img src='https://dl.dropboxusercontent.com/u/6730064/chicken_problem3.jpg' /></p>

<p>OMG... I just realized that my whole architecture is not flexible enough for my client! Should I change it? Should I rewrite my whole application? Propose a V2? Should I change my client? Should I move to another planet where birds are perfect and all share the same characteristics?</p>

<p><img src='http://ara-vision.com/gif-library/angry/stick-fuuuu.gif' /></p>

<p>Ok, ok... let's calm down and think about possible solutions; after all, OOP is great and can really reflect the complexity of the real world.</p>

<p>Well, not exactly. We can struggle with the OOP solutions, but I won't go deep into the many failed attempts we could try:  </p>

<ul>  
  <li>Create two base classes: Bird and FlyingBird (FAIL: what if the client decides to have Airplanes?)</li>
  <li>Use Protocols/Interfaces to define the flying method (FAIL: rewriting a lot of code)</li>
  <li>Define "fly()" as a static method that receives an object that can fly (FAIL: what if I have millions of flying objects? Concurrency, multi-threading)</li>
</ul>

<p>I've done this exercise with hundreds of different developers, letting them try many solutions. In short, the only possible way out of this problem in OOP is Multiple Inheritance, which, as we all know, is dangerous as hell. Only a few programming languages allow such a feature, each with many caveats in its own implementation: Perl and Python use an ordered list for Multiple Inheritance, Java 8 tries to use the compiler to avoid errors, and C++ is actually one of the only languages that implements Multiple Inheritance to its full extent.</p>

<p>So, if you don't want to create a C++ module in your application just to solve this problem, let's now try to solve the Chicken Problem using Swift's features.</p>

<p><br/><a name="extension"></a> <br />
<h2><strong>Swift - Protocol Extensions</strong></h2><a href="#list_contents">top</a></p>

<p>In short, the Chicken Problem is just a metaphor for a very common issue in software projects, when two or more "objects/entities" share some common characteristics but completely differ in others. Especially in the current scenario of mobile projects, working with Agile and getting constant feedback from users, clients change their applications every day, which means this can happen a lot during the maintenance of a product. So as developers, we need to construct architectures flexible enough to handle such changes.</p>

<p>At this point, using a good platform or programming language can help a lot to get out of those traps. Here Swift comes to the table with its Protocol Extensions feature. In many ways it is compared to Multiple Inheritance, but without all the dangers of Multiple Inheritance, thanks to a single detail in this Swift feature called Default Implementation: we can provide a default implementation for protocol properties and methods.</p>

<pre><code>
protocol MyProtocol {
    func methodOnProtocol()
}

extension MyProtocol {
    func methodOnProtocol() {
        print("Default Implementation of \(#function)")
    }

    func newMethodOnExtension() {
        print("Default Implementation of \(#function)")
    }
}
</code></pre>
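<p>A conforming type gets both methods for free. Continuing the example above with a hypothetical <code>MyType</code>:</p>

```swift
// MyProtocol and its extension are defined in the previous block.
struct MyType: MyProtocol {}

let value = MyType()
value.methodOnProtocol()     // prints "Default Implementation of methodOnProtocol()"
value.newMethodOnExtension() // prints "Default Implementation of newMethodOnExtension()"
```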

<p>Knowing that, we can get back to the "Chicken Problem" and rewrite it in a way that we code ONLY ONCE while still satisfying all the specific needs of Birds.</p>

<p><img src='https://dl.dropboxusercontent.com/u/6730064/chicken_problem4.jpg' /></p>

<p>Look how flexible this can be. Bird can still be a superclass, no worries. But "fly()" and "feathers" become Protocols with Default Implementations, which means we CODE ONCE and can make all kinds of birds without rewriting any code.</p>

<p>Notice that even if our client goes crazy and says: "You know what, my Birds application now needs Airplanes, Kites and all kinds of flying objects!", no worries: we built our architecture flexible enough to deal with that. Instead of working with a single hierarchy chain in the polymorphism approach (more specifically, Subtyping Polymorphism), we are now free to think like a child using Lego blocks (technically called Composition).</p>
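<p>Just to make the idea concrete (the protocol and type names below are mine, not taken from the diagram), the composition could be sketched as:</p>

```swift
// Capabilities become protocols with Default Implementations.
protocol Flying {
    func fly() -> String
}

extension Flying {
    func fly() -> String {
        return "Executing the complex flying algorithm"
    }
}

protocol Feathered {
    var feathers: Int { get }
}

// Each entity composes only the capabilities it really has.
class Bird {}

class Sparrow: Bird, Flying, Feathered {
    var feathers: Int { return 2000 }
}

class Chicken: Bird, Feathered {   // a Bird with feathers, but no Flying
    var feathers: Int { return 8000 }
}

class Penguin: Bird {}             // per our premise: no flying, no feathers

struct Airplane: Flying {}         // not a Bird at all, yet it flies
```

<p>The flying algorithm is written once, in the protocol extension, and any type, bird or not, can opt into it.</p>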

<p><br/><a name="protocol"></a> <br />
<h2><strong>Protocol Oriented Programming (POP)</strong></h2><a href="#list_contents">top</a></p>

<p>Protocol Extensions are something new and very powerful; we don't yet know how to fully use them, and yes, there are some caveats involved. We must practice a lot in order to master this technique. It looks like Multiple Inheritance in some aspects, but more sophisticated.</p>

<p>One of the best WWDC videos ever is the Crusty video, where Apple explains what Protocol Extensions and Protocol Oriented Programming are. I highly recommend watching it: <br />
<a href='https://developer.apple.com/videos/play/wwdc2015/408/' >https://developer.apple.com/videos/play/wwdc2015/408/</a></p>

<p>Changing our mindset from OOP to POP requires a "leap of faith". We will probably find ourselves still trying to solve problems using OOP; take a deep breath, try to think in POP from the very beginning, and try creating the architecture using POP before starting to code. </p>

<p><br/><a name="conclusion"></a> <br />
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a></p>

<p>Chickens can't fly, and the "Chicken Problem" stands for a common problem in software development, where objects/entities start out sharing common features but at some point diverge, becoming completely different entities.</p>

<p>At this point, the traditional Polymorphism and its single inheritance chain can't help. We need to be smart: using features such as Protocol Extensions and changing our mindset to Protocol Oriented Programming can help a lot to create a more flexible architecture and avoid refactoring whole applications just because the client keeps adding unexpected features.</p>

<p>That's the way the Agile world is today: we must adapt and change fast, especially in Mobile Software Development.</p>

<p>Thanks for reading, guys, <br />
Please post your thoughts and comments below.</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/the-chicken-problem-protocol-oriented-programming/</link><guid isPermaLink="false">3a218b06-8eaf-4b9a-8e00-b4b4a2a2d10d</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Sun, 12 Jun 2016 16:45:57 GMT</pubDate></item><item><title><![CDATA[Swift 2.0 is now better than Objective-C]]></title><description><![CDATA[<p>Hello folks,</p>

<p>I'll start with a polemic phrase: "Swift is the future". I'm saying it too. In this article I'll tell a little about why I now take my hat off to Swift. I'll cover a few key points of Swift 2.0, like LLVM optimization, Generics Specialization, Value Syntax and Protocol Oriented Programming.</p>

<p><a name="list_contents"></a>  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#compiler">It's all up to the compiler</a></li>
    <li><a href="#llvm">LLVM and Chris Lattner</a></li>
    <li><a href="#generics">Generics</a></li>
    <li><a href="#value">Value Syntax</a></li>
    <li><a href="#protocol">Protocol Oriented</a></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/><a name="compiler"></a> <br />
<h2><strong>It's all up to the compiler</strong></h2><a href="#list_contents">top</a></p>

<p>Yeah, it's true that nothing in this old binary world can be faster than C. This statement stands; however... Swift now runs faster at runtime. Why?</p>

<p>Not exactly faster than C, but faster than any application we could create with pure C in our daily work. With tons of pure C code and hyper-optimized routines, we could create an application running as fast as Swift does now, but it could become a monster, with code impossible to manage across a large team of developers, with Seniors and Juniors in the same space.</p>

<p>This is the point where we all think the same: "well, to let every ordinary developer write the same rich code, we could have a compiler or framework good enough to create hyper-optimized low-level code and provide a high-level abstraction that lets all developers interact and create the state of the art in software performance, while keeping the code readable to a large developer team". Well, this is the base idea behind the Swift compiler.</p>

<p>It's all about the compiler... really. As we all know, LLVM compiles C and Objective-C code into a low-level Assembly that is then assembled into machine code. LLVM does the same with Swift code: it compiles to assembly in the last human-readable phase of the whole compilation. At this point, things become really interesting.</p>

<p>All the hard work necessary to create super-optimized code in C is now done by the compiler, without pain. LLVM can infer so many things based on Swift code that it would be insane to recreate the same performance in an ordinary project. But remember, this is a reality in Swift 2.0 with the implementation of all its great new features. Let's see the new features that give Swift 2.0 such power.</p>

<p><br/><a name="llvm"></a> <br />
<h2><strong>LLVM and Chris Lattner</strong></h2><a href="#list_contents">top</a></p>

<p>Chris Lattner was a promising compiler engineer since his early days in college. Apple hired him in 2005 to work on the LLVM compiler, and in 2011 gave him a great opportunity: creating a new high-level language that could work greatly with all his genius ideas in LLVM. With a few other developers at Apple, Lattner started the Swift language that year.</p>

<p>Lattner arrived at Apple with high morale and a great responsibility. He delivered really fast, taking LLVM and Clang to a higher level, and he created language features like C Blocks, a super-optimized piece of code that can run extremely fast at runtime due to the way it treats Stack memory. If you remember from iOS 4, Blocks are what make GCD possible.</p>

<p>Lattner also contributed by creating the ARC feature for Objective-C. As you can see, he is responsible for the most advanced features in Apple development over the last 5 years. As for Swift, it is Lattner's biggest dream. A curious thing: he started working on Swift at night, at home. For a year and a half he worked on Swift in secret, without telling anyone at Apple, not even his closest friends. Only in 2011 did Chris reveal his secret to the top executives at Apple. They loved it!</p>

<p>Apple assigned a few other developers to the Swift project, and after another year and a half, Swift became Apple's greatest focus. Can you imagine what Lattner's dream was? To "create a language that would soon change the world of computing", in the words of Wired magazine.</p>

<p>Just as the creators of C (Dennis Ritchie and Kenneth Thompson) changed the world of computing once, Chris Lattner is doing it again, this time using the compiler's power!</p>

<p>Now, the Apple part in this... I'm biased here because I really love Apple, but let's just take a look at how great Apple's marketing is. Google's language Go first appeared in 2009, and 6 years later it doesn't even appear in the Top 50 of the Tiobe Index, nor in the communities. Swift is 1 year old and is reaching the Top 10 at Tiobe (while Objective-C is in free fall from the Top 5). <br />
<a href='http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html'  target="_blank">Tiobe Link</a></p>

<p>Well, of course there is a big difference between Go and Swift; Go also reached higher ranks at Tiobe during its launch. But come on... Go was made by one of the creators of C, and Google is behind it, a successful combination that doesn't look to be working that well. I dare say that Swift is much more likely to change the "world of computing" than any other language today.</p>

<p>And now, Swift is Open Source, just as Mike Ash wished last year. ;)</p>

<p><br/><a name="generics"></a> <br />
<h2><strong>Generics</strong></h2><a href="#list_contents">top</a></p>

<p>One of the things I love most in Swift is Generics! How powerful this is for developers. For the runtime, however, Generics were a time bomb.</p>

<p>In order to run reliable code, the compiler has to add a lot of checks and double checks to work with a Generic instruction. But no more in Swift 2.0. Again, "power to the compiler!" LLVM can now infer much more from our code, not just by looking at one file but by looking at the whole project. By doing this, the compiler can create specific versions of our Generics; this is called <b>Generic Specialization</b>.  </p>

<pre><code>
// Consider a Generic function
func maxFunc<T: Comparable>(x: T, _ y: T) -> T {
    if x < y {
        return y
    }
    return x
}

...

// In another file
var a = 4
var b:Int = 2 + 3
var c = maxFunc(a, b)
</code></pre>

<p>Now, with Swift 2.0, the compiler can infer that the above code can be replaced by:  </p>

<pre><code>
// Version generated by Generic Specialization
func maxFunc<Int: Comparable>(x: Int, _ y: Int) -> Int {
    if x < y {
        return y
    }
    return x
}
</code></pre>

<p>With this specialized version, the compiler can guarantee much higher performance at runtime.</p>

<p>This is the same concept applied many years ago to the duality of <code>isEqual:</code> and <code>isEqualToString:</code>. Both do the same job, and you can use both on an <code>NSString</code> instance, but the second one can run faster: because it's specialized, it can avoid unnecessary checks.</p>

<p><br/><a name="value"></a> <br />
<h2><strong>Value Syntax</strong></h2><a href="#list_contents">top</a></p>

<p>This one already exists in Swift 1.x, but now it's much better. <b>Value Syntax</b> is the opposite of <b>Reference Syntax</b>, the one used for C pointers and consequently for Objective-C classes. The C <code>struct</code> and the basic C data types use Value Syntax, so nothing new here.</p>

<p>Just as a reminder, here is an example of both syntaxes: <br />
(Objective-C code)</p>

<pre><code>
// Value Syntax, memory is copied.
CGRect rect = {0.0, 0.0, 100.0, 100.0};
CGRect rectVS = rect;

rect.size.width = 50;
rectVS.origin.x = 10;

NSLog(@"%@", NSStringFromCGRect(rect)); // Prints {{0, 0}, {50, 100}}
NSLog(@"%@", NSStringFromCGRect(rectVS)); // Prints {{10, 0}, {100, 100}}

// Reference syntax, memory is shared.
CGRect *rectRS = &amp;rect;

rect.size.width = 25;
(*rectRS).origin.x = 10;

NSLog(@"%@", NSStringFromCGRect(rect)); // Prints {{10, 0}, {25, 100}}
NSLog(@"%@", NSStringFromCGRect(*rectRS)); // Prints {{10, 0}, {25, 100}}
</code></pre>

<p>As you can see, both have their uses and their value. The thing is that in Objective-C you must know C pointers and C data types in depth in order to work correctly with Value and Reference syntax. What's new in Swift 2.0 is, again, the ability to infer things, so... "power to the compiler!"</p>
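<p>The same contrast can be sketched in Swift itself, where a <code>struct</code> copies and a <code>class</code> shares (the <code>Size</code> types below are hypothetical):</p>

```swift
// Value semantics: assignment copies the memory.
struct SizeValue {
    var width = 100.0
}

var size = SizeValue()
var sizeVS = size        // independent copy
sizeVS.width = 50.0
print(size.width)        // prints 100.0, unaffected

// Reference semantics: assignment shares the instance.
class SizeReference {
    var width = 100.0
}

let sizeRS = SizeReference()
let shared = sizeRS      // same underlying instance
shared.width = 50.0
print(sizeRS.width)      // prints 50.0, shared
```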

<p>LLVM can infer much more based on your code and project files. It's able to determine whether a variable/constant must use value syntax or reference syntax. By default, Swift is Value Syntax-based, but you can use both styles if you want, by changing the architecture of your code (like creating class wrappers) or by forcing a reference (using the inout instruction). For inout, guess what... the syntax is the same as in C, with "&amp;". Great!  </p>

<pre><code>
func fooSum(inout varA:Int, varB:Int)
{
    varA += varB
}

var a = 5
let b = 3
fooSum(&a, varB: b)
print(a)
</code></pre>

<p><br/><a name="protocol"></a> <br />
<h2><strong>Protocol Oriented</strong></h2><a href="#list_contents">top</a></p>

<p>This is the last part I want to talk about, but I think it's the most exciting thing in Swift 2.0. Guess what, again it's a compiler thing: "power to the compiler!" But this time it's so huge, it's a complete change of paradigm and can really change the way we code.</p>

<p>Swift 2.0 is now being called Protocol Oriented, as a counterpart to Object Oriented Programming. Why is that? Introduced in Swift 2.0, the <b>Protocol Extension</b> is what changes everything. Combined with the power of <b>Where Clauses</b>, this new protocol feature enables us to think about a whole new world without classes and polymorphism.</p>

<p>First off, as a side effect, Protocol Extensions eliminate the Mocks in our tests, which I consider ugly and unnecessary, BTW. Mocks suck; Protocol Extensions are great and can do the same thing. That's enough material for another article; for now, let's focus on how this new feature can change our world.</p>

<p>You already know that Value Types in Swift are great and powerful. Because the compiler can treat Enums, Structs and Classes the same way at a lower level, we can create functions, properties and extensions for all these types and get the same performance at runtime.</p>

<p>Just to illustrate how powerful this feature is, consider this: in the Swift standard library, Arrays, Sets and Dictionaries are struct types (actually, almost everything in the Swift standard library is a Struct with many Extensions). All these 3 types share the same protocol, <code>CollectionType</code>, and this protocol is also shared by many others, like String.</p>

<p>Let's say you want to create an extension for Arrays and Sets. In Swift 1.x, or even in Objective-C, you could create a category (called an extension in Swift) for Array and another one for Set; you would start duplicating code in this scenario. Now, with Protocol Extensions, you can create a single extension for <code>CollectionType</code>, and all the types that implement this protocol gain the new function for free.</p>

<p>But wait, you may not want to give this function to every <code>CollectionType</code>, because Dictionaries and Strings would get it too. No problem: with the <b>Where Clause</b> (aka Constraints) you can restrict it in one single line of code, defining that your protocol extension acts only on collections with a certain element type.</p>
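<p>A sketch of such a constrained extension, written in the Swift 2 syntax of the time (the <code>total()</code> method name is my own):</p>

```swift
// Extend CollectionType only where the elements are Int.
extension CollectionType where Generator.Element == Int {
    func total() -> Int {
        var sum = 0
        for value in self {
            sum += value
        }
        return sum
    }
}

let numbers = [1, 2, 3]
let uniques: Set<Int> = [1, 2, 3]
numbers.total()   // 6
uniques.total()   // 6
// A Dictionary doesn't gain total(): its elements are (Key, Value) pairs, not Int.
```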

<p>Notice that a <b>Protocol Extension</b> is completely different from <b>Protocol Inheritance</b> (or Hierarchy). Hierarchy is like subclassing a class, which is nothing new and already exists in Obj-C. Swift calls <b>Extension</b> what we used to call <b>Category</b>, so Swift 2.0 can create concrete and abstract categories for protocols, which is completely different from Protocol Inheritance.</p>
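<p>To make the distinction concrete (the protocol names below are hypothetical): inheritance refines a protocol with new requirements, while an extension adds behavior to an existing one:</p>

```swift
// Protocol Inheritance (Hierarchy): nothing new compared to Obj-C.
protocol Animal {
    func eat() -> String
}

protocol Pet: Animal {
    func play() -> String
}

// Protocol Extension: a "category" that turns the abstract into concrete.
extension Animal {
    func eat() -> String {
        return "Default eating behavior"
    }
}

struct Dog: Pet {
    func play() -> String { return "Fetch!" }
    // eat() comes for free from the extension.
}
```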

<p>Take a moment to think about how great this new feature is: it can break paradigms and open up a world of possibilities. Some may say it isn't that much, and they may think about how to achieve similar behavior, coding once, in other languages. That's not the point here. Discussing a language isn't about how much glue code you could use to achieve the same behavior; discussing a language is about how its syntax and paradigms can change the way we think and code.</p>

<p>This protocol feature in Swift 2.0 is enough to drive us outside the box of polymorphism and classes. Remember, classes try to model the real world, but they have many problems, like the single inheritance chain, and that is not how the real world works. Languages like C++ try to give classes a little more power with <b>Multiple Inheritance</b>, but... OMG... you know that this attempt is probably one of the worst features ever.</p>

<p>In short, OOP was an attempt to recreate the structure of the real world and, of course, create new programming paradigms, but this is not how the real world works. Now with POP we have a new way to think, a new way to deal with reality and transcribe real-world things into our small programming world.</p>

<p><br/><a name="conclusion"></a> <br />
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a></p>

<p>From this article, it would be nice to remember a few things:  </p>

<ul>  
    <li>Swift performance comes from the compiler</li>
    <li>Swift 2.0 is about to change our paradigms</li>
    <li>Swift is the future</li>
</ul>

<p>I'll start writing a series about Swift, talking about its new features, its secrets, sharing Playgrounds and algorithms. See you there.</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/swift-2-0-is-now-better-than-objective-c/</link><guid isPermaLink="false">7e5e88d6-5464-4b5e-a0b5-86029656210c</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Thu, 25 Jun 2015 22:43:17 GMT</pubDate></item><item><title><![CDATA[Swift VS Obj-C]]></title><description><![CDATA[<p>Hello people,</p>

<p>There is a lot of talk out there about Swift. As always, Apple reached the trending topics with its biggest announcement: "a new programming language; I introduce you to Swift".</p>

<p>I've seen people getting scared, screaming, jumping out the window. Calm down, people; my sincere word for you is: Swift is GOOD! I'll explain here why Swift is a good thing and why you should not worry if you don't want to change.</p>

<!--more-->

<p>As Swift is still under Apple's NDA, this IS NOT a tutorial about Swift; it's just a clarification and a word of relief for the afflicted ones. But as soon as it goes live, I'll immediately post LOTS of tutorials about Swift.</p>

<p>The first important thing is: do not confuse Apple Swift with the OpenStack Swift storage (<a href='http://swift.openstack.org/' >http://swift.openstack.org</a>), another scientific project called Swift Language (<a href='http://swift-lang.org/' >http://swift-lang.org</a>) or any other reference to the name "Swift"; yeah, this name is too generic.</p>

<p>Well, there are 3 myths that Apple is trying to introduce to scare people, and I'll demystify them here:</p>

<ol>  
<li>Swift is faster than C and Objective-C (MYTH)</li>  
<li>You can learn Swift faster than Obj-C (MYTH)</li>  
<li>You can make everything with Swift (MYTH)</li>  
</ol>

<h2><strong>1st MYTH - Swift is Faster than C and Objective-C</strong></h2>  

<p>Apple showed this image in the WWDC 2014 keynote: <br />
<img src='http://db-in.com/images/swift-performance.jpg'  alt="Swift VS Obj-C"/> <br />
Despite the countless tests I've made and seen others doing in the performance field with Swift, I'll focus on something below the surface, something that really proves this image is a lie. (But just to let you know, in the performance tests, Obj-C is still the winner.)</p>

<p>Since the very beginning of computer science and technology, our human knowledge has been limited by physical materials. I mean, every single computer in this world operates on a plastic board, with circuits made of copper and other metals. The magic happens when very small electrical impulses start running through the board. For these impulses there are only two possible values: either they exist (1) or they don't (0). This is the binary world; the Processor tracks those impulses countless times per second (well, of course it's not really countless, but you get the meaning).</p>

<p>The Quantum computer is a reality, but I'll not go deep into this subject. It is really the only way to take our current computer science to a new level: changing the binary logic to another logic with -1, 0, 1 and everything between those values. Anyway...</p>

<p>So we know that working with bits is the fastest way to work with traditional computers, because bits are the processor's real language. Now, you probably know that Obj-C is a superset of standard C and works with "pointers". I'll not go further into this subject either, but we all know that working with pointers is the way to slide directly through the bits in memory, which means working directly with bits. Back to the processor: nothing can be faster than working with bits on a traditional computer. So the BEST a new language could achieve in performance is EQUAL to C, but never FASTER than C.</p>

<p>This is the first point: nothing is faster than C on a traditional computer; EQUAL is the best case.</p>

<p>But why did Apple show this graphic? Is it a lie? Not exactly. Like everything Apple does, this image is a marketing piece intended to be used in their marketing WAR.</p>

<p>It's true that the greatest part of developers don't know the C and Obj-C structure in depth, which leads them to make mistakes, which leads the final app to have poor performance. So for 90% of apps, which are made by beginners, using Swift will help avoid mistakes, avoiding bad performance and bad algorithms. If you compare the app performance of an ordinary developer using Obj-C and Swift, yes, Swift will be faster. But if you compare the code of an experienced developer using Swift and Obj-C, nothing will really change: absolutely no performance gain. In fact, as many tests have proved, the performance could even be worse, because the Swift translation will never be smarter than a good C developer.</p>

<p>So, saying Swift is faster than Obj-C is a big LIE, but for Apple it's a marketing weapon and will be used as such.</p>

<h2><strong>2nd MYTH - The learning curve is better in Swift</strong></h2>

<p>In a far future, maybe this phrase will become true, but not now. Not even close. You can't do everything with Swift. The NeXTSTEP legacy stands to this day, and it's HUGE. There are so many frameworks at Apple using the NS base that it's insane to try to refactor everything. So Apple will just leave it as it is.</p>

<p>Again, this means that for beginners and ordinary developers Swift will do many things, but when you try to make something a little more sophisticated, like using the new Metal API (a 3D graphics lib, Apple's new competitor to OpenGL), you'll notice that Swift is not enough, because Metal is not even compatible with Swift. You must know and learn about Obj-C, C, MRC and everything else that Apple is trying to hide from us.</p>

<p>So, from a marketing perspective, Apple just made the first step into its development world a little bit easier, but as soon as a new developer tries to make a small feature similar to a great app, BAM... he will notice that there is no way to achieve that without going deep into the Obj-C world.</p>

<p>As soon as you start mixing Obj-C and Swift, you will find how exponential this learning curve can be. You'll need to learn two languages at once and learn how to mix them. As a marketing strategy for Apple, this can be dangerous.</p>

<h2><strong>3rd MYTH - You can make everything with Swift</strong></h2>

<p>As a consequence of what we discussed above, Swift is far from ready to do everything that Obj-C and C do. It does not have all the frameworks, and its integration with Obj-C is not really FULL integration.</p>

<p>Let me explain: the communication between Obj-C and Swift is made using pre-compiled code that Xcode takes as Swift code. Apple calls this feature "Objective-C Bridging", which is nothing more than a header for pre-compiling your Objective-C code. This means the old C and the new Swift can live within the same project, but not within the same file, as Obj-C and C could. (BTW, Swift has a new file extension, ".swift"; Apple has always liked long file extensions instead of 2 or 3 letters.) The opposite direction already exists: Xcode automatically puts a suffix on all your Swift classes and objects, calling them "<name>-Swift".</p>

<p>Do you know why this "Objective-C Bridging" can exist? Because the Apple compiler, LLVM 6.0, converts all the Swift code to basic Obj-C/C code during compilation. Yeah! So your app on the iPhone is still running the same kind of runtime, the same compiled code; Apple didn't change anything down there. This is another clue about the first MYTH (Swift not really being faster than Obj-C).</p>

<p>Just one more word on this subject: we can totally understand why Apple didn't change, and probably will not change, anything in the runtime. I've been through a few runtime changes in my life and I can say it's very painful. AS2 to AS3 (ActionScript): changing the runtime means completely giving up the current applications and refactoring entire projects, which can kill thousands of businesses and companies. The old ASP to .NET C# was a change even more painful than Flash's.</p>

<h2><strong>Conclusion</strong></h2>

<ul>  
<li>Is Swift cool? YES</li>  
<li>Will Swift grow fast? Probably</li>  
<li>Will learning Swift be fast? YES</li>  
<li>So can we give up Obj-C? NO</li>  
</ul>

<p>This is the bad news for beginners: they must learn two languages instead of one. Anyone can build ordinary apps; PhoneGap and Titanium can do ordinary apps. But I'm not talking about that level, I'm talking about real development, the kind that can bring you awards, the kind that can change things, the special ones, TOP apps. At that level, you can safely give up on Swift for now.</p>

<iframe src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  height="130" width="100%" scrolling="no"></iframe>]]></description><link>http://blog.db-in.com/swift-vs-obj-c/</link><guid isPermaLink="false">8f596da2-4829-42a3-98a9-93657df0f899</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Mon, 09 Jun 2014 06:33:45 GMT</pubDate></item><item><title><![CDATA[Universal Framework for iOS]]></title><description><![CDATA[<p><img src='http://db-in.com/images/framework_ios_2.jpg'  alt="" title="framework_ios_2" width="200" height="200" class="alignright size-full" />Hello my friends,</p>

<p>Due to some bugs and questions with the old tutorial, I'm creating this new one, much simpler and with fewer bugs than the other one. I won't post the old link here because everything you need to know you can find right here.</p>

<p>Nowadays there are a few alternatives to create a framework for iOS by changing the default Xcode scripts, which may not be a good choice if you want to publish the apps built with your custom framework. Here I'll show how to construct a Universal Framework for iOS using only the default tools from Xcode.</p>

Let's start!  
<!--more-->

<p><a name="list_contents"></a> <br />
Here is a little list of contents to orient your reading:  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#framework_ios">Framework on iOS? Really?</a></li>
    <li><a href="#understanding">Understanding Universal, Dynamic and Static concepts</a></li>
    <li><a href="#framework_project">Constructing a Framework Project</a>
        <ul>
            <li><a href="#step_1">1. Create the Project</a></li>
            <li><a href="#step_2">2. Framework Classes</a></li>
        </ul></li>
    <li><a href="#creating_framework">Creating the Framework</a>
        <ul>
            <li><a href="#step_3">3. Create a Framework Target</a></li>
            <li><a href="#step_4">4. Bundle Setup</a></li>
            <li><a href="#step_5">5. Adding code and resources to the Bundle (Framework)</a></li>
        </ul></li>
    <li><a href="#building_universal">Building the Universal Framework</a>
        <ul>
            <li><a href="#step_6">6. Creating Universal Target</a></li>
            <li><a href="#step_7">7. Lipo Tool Script</a></li>
        </ul></li>
    <li><a href="#importing">Importing your Universal Framework</a>
        <ul>
            <li><a href="#step_8">8. Importing</a></li>
        </ul></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p>You can download the template instead of doing it manually:</p>

<p><a href='https://dl.dropboxusercontent.com/u/6730064/ios_universal_framework_template.zip'  onmousedown="_gaq.push(['_trackEvent', 'Framework iOS', 'Xcode', 'Download']);"><img class="alignleft" title="download" src='http://db-in.com/imgs/download_button.png'  alt="Download Xcode Template"/> <br />
<strong>Download now</strong> <br />
Xcode Template <br />
</a><br/></p>

<p>Unzip it and place it at <b>/Library/Developer/Xcode/Templates/Project Templates/</b> <br />
(create the path if needed)</p>

<p><br/><a name="faq"></a> <br />
<h2><strong>FAQ</strong></h2><a href="#list_contents">top</a> <br />
First off, I want to make sure you understand what this framework for iOS can do; this can save you time reading this article:  </p>

<ol>  
    <li><strong>Can I use this Framework as a Bundle to store my files, XIBs, images?</strong>
A: Yes, you can, and now it's very easy to retrieve your files.</li>

    <li><strong>Can I use this Framework to import other Frameworks, like import UIKit, CoreGraphics, OpenGL?</strong>
A: No. There is no way to do that. The code in this custom framework can import classes from other frameworks normally, but that is just a reference; classes from the other framework will not be compiled at this time. So you must also link the referenced framework in the new project.</li>

    <li><strong>Will my code be visible to others?</strong>
A: No. This framework exports a compiled binary, so no one can see inside it. You can do the same for some other files, like XIBs.</li>

    <li><strong>Why do I need this?</strong>
A: This is for developers/teams that want to share their code without exposing the entire source (.m/.c/.cpp files). Besides, it is for anyone who wants to organize compiled code + resources (images, videos, sounds, XIBs, plists, etc.) in one single place. And it is also for teams that want to work together on top of the same base (framework).</li>  
</ol>

<p><br/><a name="framework_ios"></a> <br />
<h2><strong>Framework on iOS? Really?</strong></h2><a href="#list_contents">top</a> <br />
<img src='http://db-in.com/images/framework_icon.png'  alt="" title="framework_icon" width="128" height="128" class="alignleft size-full wp-image-1397" />Ok buddies, let's make something clear. Many people have said: "iOS doesn't support custom frameworks!", "Custom frameworks are not allowed on iOS!", "There are no custom frameworks on iOS!" and many other discouraging things like these. Look, I've made many frameworks and worked with many others, and I don't believe it is really impossible to use a framework on iOS. Based on my experience and knowledge of frameworks, a custom framework on iOS devices is absolutely feasible. If we think more about this issue, we can find an elegant solution, right? First, let's understand what a framework really is; here is the definition of a framework through Apple's eyes:</p>

<blockquote>A framework is a hierarchical directory that encapsulates shared resources, such as a dynamic shared library, nib files, image files, localized strings, header files, and reference documentation in a single package.</blockquote>

<p>Thinking in terms of architecture and structure, it doesn't make much sense for something with this description not to be allowed on iOS. Apple also says:</p>

<blockquote>A framework is also a bundle and its contents can be accessed using Core Foundation Bundle Services or the Cocoa NSBundle class. However, unlike most bundles, a framework bundle does not appear in the Finder as an opaque file. A framework bundle is a standard directory that the user can navigate.</blockquote>

<p>Good. Now, thinking about iOS security, performance and size, the only thing in the framework definition which doesn't fit iOS technology is the "dynamic shared library". The words "dynamic" and "shared" are not welcome in the iOS architecture. So Apple allows us to work with and distribute something called a "<strong>Static Library</strong>". I don't like it! It's not as easy as a Cocoa Framework: if a developer gets your static library, he also needs to set a header search path, or import the header files... it's not convenient, it's a mess!</p>

<p><img src='http://db-in.com/images/static_library_no.png'  alt="" title="static_library_no" width="128" height="128" class="alignright size-full wp-image-1403" /></p>

<p>Well, so the framework concept is absolutely compatible with iOS, except for the "dynamic shared library"; on the other hand, Apple says that a "static library" is OK for iOS. So if we replace the "dynamic shared library" with a "static library", we can construct a custom framework for iOS, right?</p>

<p>Right!!!!</p>

<p>This is exactly what we'll make in this article: let's construct a framework from a static library; better yet, a Universal Framework.</p>

<p><br/><a name="understanding"></a> <br />
<h2><strong>Understanding Universal, Dynamic and Static concepts</strong></h2><a href="#list_contents">top</a> <br />
Simple answer:  </p>

<ul>  
    <li>Universal: Works on all architectures. iOS devices use <strong>armv6</strong> and <strong>armv7</strong>; the iOS Simulator on Mac OS X uses <strong>i386</strong>.</li>
    <li>Dynamic: The compiler doesn't include the target files directly. The classes/libraries are already pre-compiled (binary format) and live on a system path. Besides, dynamic libraries can be shared by many applications. This is exactly what a Cocoa Framework is.</li>
    <li>Static: The classes/libraries are compiled into the application by the compiler at build time. These files can't be shared by other applications and live inside the application bundle.</li>
</ul>

<p>Simple as that. If you need more information about dynamic vs. static libraries, try this <a href='http://developer.apple.com/library/mac/#documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/OverviewOfDynamicLibraries.html' target="_blank">Apple's Documentation</a>.</p>

<p>No more concepts, hands at work!</p>

<p><br/><a name="framework_project"></a> <br />
<h2><strong>Constructing a Framework Project</strong></h2><a href="#list_contents">top</a> <br />
<a name="step_1"></a><h3>1. Create the Project:</h3><a href="#list_contents">top</a> <br />
I want to show you the entire process step by step, so let's start with the most basic: create an iOS project. You can choose any application template in Xcode, this is not really important, but remember to choose a template that lets you test your framework code before exporting it. <br />
<img src='http://db-in.com/images/xcode_new_project.jpg'  alt="Create an application project." title="xcode_new_project" width="600" class="size-full wp-image-1399" /></p>

<p><br/><a name="step_2"></a> <br />
<h3>2. Framework Classes:</h3><a href="#list_contents">top</a> <br />
<img src='http://db-in.com/images/xcode_project_navigator.jpg'  alt="Create your framework classes." title="xcode_project_navigator" width="257" height="286" class="size-full wp-image-1400" /></p>

<p>Remember to create an "import header" to make everything simpler and more organized for the users of your framework. Remember to write this header file with framework notation, just as shown in the image below. Also remember to create your classes taking care to hide the ones which should not be visible to other developers (the users of your framework). We will set the public and private headers soon, but it's very important to protect the "core" classes, I mean, the classes you don't want to make visible to other developers.</p>

<p>For those private classes, you can import their headers inside the "<strong>.m</strong>" (or .mm, .cpp, etc.) file of a public class; by doing this you protect the headers of the private classes. Well, I know you probably already know that, I'm saying it just to reinforce.</p>
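<p>As a sketch, assuming the framework is named "FI" (the name the build script uses later on) and the class names are hypothetical, the umbrella header and a private import could look like this:</p>

```objectivec
// FI.h -- hypothetical umbrella header, written with framework
// notation so users can simply write: #import <FI/FI.h>
#import <FI/PublicClassA.h>
#import <FI/PublicClassB.h>

// PublicClassA.m -- a private header is imported only inside the
// implementation file, so it never leaks into a public header:
// #import "PrivateCore.h"
```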

<p>Remember that organization is 90% of a good framework, so try to follow all of Apple's advice when naming your classes, methods, properties, functions, etc.</p>

<p><br/><a name="creating_framework"></a> <br />
<h2><strong>Creating the Framework</strong></h2><a href="#list_contents">top</a> <br />
<a name="step_3"></a><h3>3. Create a Framework Target:</h3><a href="#list_contents">top</a> <br />
OK, let's create a target to compile our framework. Click on the icon of your project in the project navigator on the left and hit the "Add Target" button. A new window will come up. Now comes our first trick: instead of creating a "Cocoa Touch Static Library" or a "Cocoa Framework", we will create a "Bundle" target.</p>

<p>A Bundle? Really? Yes! I can explain. A "Cocoa Framework" target can't be compiled to armv6/armv7 and Xcode doesn't allow us to use "Static Libraries" in a "Cocoa Framework", so we can't use this target. On the other hand, we can't use "Cocoa Touch Static Library" either, because it doesn't use the framework structure that we want.</p>

<p>Now, the <strong>Bundle</strong> target could be the best choice. It can hold any file we want, we can compile source code inside it and... we can turn it into a framework. To tell the truth, almost all "Framework &amp; Library" targets could be turned into a framework too, even the "Cocoa Touch Static Library"; you will probably figure out how throughout this article. For now, let's create a <strong>Bundle</strong> target.</p>

<p><img src='http://db-in.com/images/xcode_new_target.jpg'  alt="Create a Bundle target rather than Cocoa Touch Static Library." title="xcode_new_target" width="600" class="size-full wp-image-1402" /></p>

<p><br/><a name="step_4"></a> <br />
<h3>4. Bundle Setup:</h3><a href="#list_contents">top</a> <br />
It's time to make all the necessary changes to the Bundle target. Unlike in the old tutorial, you don't need to clean up anything. Just know that everything else will be ignored (linked frameworks, .plist files, .pch, etc.).</p>

<p>I'm sure you already know this, but just to reinforce, here is the Build Settings screen; you can find it by clicking on the project icon in the left project navigator and then clicking on the "Build Settings" tab. <br />
<img src='http://db-in.com/images/xcode_build_settings.jpg'  alt="You must make a special Build Setting to turn a Bundle into a framework." title="xcode_build_settings" width="600" class="size-full wp-image-1405" /></p>

<p>Here is our second great trick, or maybe it would be better to say "tricks". Let's change the "<strong>Build Settings</strong>" following this list:  </p>

<ul>  
    <li><strong><em>Base SDK</em></strong>: <span style="color: #3366ff;"><strong>Latest iOS (iOS X.X)</strong></span> (X.X will be the number of the latest iOS SDK installed on your machine).</li>
    <li><strong><em>Architectures</em></strong>: <span style="color: #3366ff;"><strong>$(ARCHS_STANDARD_32_BIT) armv6</strong></span> (it's very important to use exactly this value, including the space before "armv6"). This setting is valid for Xcode 4.2; if you are using an older version, use the "Standard (armv6 armv7)" option. (The values for this property depend on the value of the item below, so set that one first.)</li>
    <li><strong><em>Build Active Architecture Only</em></strong>: <span style="color: #3366ff;"><strong>NO</strong></span> (otherwise we can't compile for armv6 and armv7 at the same time).</li>
    <li><strong><em>Valid Architecture</em></strong>: <span style="color: #3366ff;"><strong>$(ARCHS_STANDARD_32_BIT)</strong></span> (it's very important to use exactly this value). If your Xcode is showing two lines with armv6 and armv7, delete them and insert this value in one single line.</li>
    <li><strong><em>Dead Code Stripping</em></strong>: <span style="color: #3366ff;"><strong>NO</strong></span>.</li>
    <li><strong><em>Link With Standard Libraries</em></strong>: <span style="color: #3366ff;"><strong>NO</strong></span>.</li>
    <li><strong><em>Mach-O Type</em></strong>: <span style="color: #3366ff;"><strong>Relocatable Object File</strong></span>. This is the most important change. Here we instruct the compiler to treat the Bundle as a relocatable file; by doing this, we can turn it into a framework with the wrapper setting.</li>
    <li><strong><em>Other Linker Flags</em></strong>: This setting is not mandatory, but if you are planning to use any kind of C++ code (.cpp or .mm) in this framework, Chris Moore (in the comments) advises using the "-lstdc++" option. In this case it could also be a good idea to use "-ObjC", to avoid conflicts with old compilers.</li>
    <li><strong><em>Wrapper Extension</em></strong>: <span style="color: #3366ff;"><strong>framework</strong></span>. Here we change the Bundle into a Framework. To Xcode, a framework is just a folder with the extension <em>.framework</em> which contains one or more compiled binaries, resources and some folders; a folder usually called Headers contains all the public headers.</li>
    <li><strong><em>Generate Debug Symbols</em></strong>: <span style="color: #3366ff;"><strong>NO</strong></span> (this is a very important setting, otherwise the framework will not work on other computers/profiles).</li>
    <li><strong><em>Precompile Prefix Header</em></strong>: <span style="color: #3366ff;"><strong>NO</strong></span>.</li>
    <li><strong><em>Prefix Header</em></strong>: <span style="color: #3366ff;"><strong>""</strong></span> (leave it blank).</li>
</ul>
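<p>For reference, the list above maps roughly to these build-setting keys in .xcconfig form. Treat this as a sketch from that Xcode era; the file name is hypothetical and "mh_object" is the key-level value behind "Relocatable Object File":</p>

```
// FrameworkBundle.xcconfig -- hypothetical, mirroring the settings above
ARCHS = $(ARCHS_STANDARD_32_BIT) armv6
VALID_ARCHS = $(ARCHS_STANDARD_32_BIT)
ONLY_ACTIVE_ARCH = NO
DEAD_CODE_STRIPPING = NO
LINK_WITH_STANDARD_LIBRARIES = NO
MACH_O_TYPE = mh_object            // "Relocatable Object File"
WRAPPER_EXTENSION = framework
GCC_GENERATE_DEBUGGING_SYMBOLS = NO
GCC_PRECOMPILE_PREFIX_HEADER = NO
GCC_PREFIX_HEADER =
OTHER_LDFLAGS = -ObjC -lstdc++     // optional, for C++ code
```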

<p><span style="color: #ff0000;"><strong>IMPORTANT:</strong> Since Xcode 4.x, the armv6 architecture is no longer supported. So, to create a truly Universal Framework we must perform a small "hack":</span>  </p>

<ol>  
    <li>After changing the settings above, close Xcode, find the .xcodeproj (the project file) in Finder and choose "Show Package Contents".</li>
    <li>Open the file "project.pbxproj" in a text editor.</li>
    <li>Delete all the lines containing VALID_ARCHS = "$(ARCHS_STANDARD_32_BIT)".</li>
</ol>
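<p>The third step can also be scripted. Here is a minimal shell sketch (the project name in the usage line is hypothetical) that deletes those lines and keeps a .bak backup of the original file:</p>

```shell
# Strips every VALID_ARCHS = "$(ARCHS_STANDARD_32_BIT)" line from a
# project.pbxproj file; -i.bak keeps a backup next to the original.
strip_valid_archs() {
  sed -i.bak '/VALID_ARCHS = "\$(ARCHS_STANDARD_32_BIT)"/d' "$1"
}

# Usage (hypothetical project name):
# strip_valid_archs MyFramework.xcodeproj/project.pbxproj
```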

<p><br/><a name="step_5"></a> <br />
<h3>5. Adding code and resources to the Bundle (Framework)</h3><a href="#list_contents">top</a> <br />
It's time to place the content in our framework and define the public headers. To do that, with the Bundle target selected, click on the "<strong>Build Phases</strong>" tab. At the bottom, hit the "<strong>Add Phase</strong>" button and then "<strong>Add Copy Headers</strong>".</p>

<p>Open the recently created "<strong>Copy Headers</strong>" section and separate your public headers from the private or project headers. The difference here is:  </p>

<ul>  
    <li>Public: Headers that other developers must know about in order to work with your framework. In the final framework product, these headers will be visible even to Xcode.</li>
    <li>Private: Headers that are not necessary for other developers, but are good for consultation or reference. These headers will not be visible to Xcode, but will be inside the framework folder.</li>
    <li>Project: Headers that neither other developers nor Xcode can access. In reality these headers will not be placed in the final product at all; they just instruct the compiler when building your custom framework.</li>
</ul>

<p>Now, open the "<strong>Compile Sources</strong>" section and place there all your <strong>.m</strong>, <strong>.c</strong>, <strong>.mm</strong>, <strong>.cpp</strong> or any other compilable source files.</p>

<p>If your framework includes non-compilable files, like images, sounds and other resources, place them in the "<strong>Copy Bundle Resources</strong>" section. Later, when we generate the final framework, all your resources will be placed in a folder called "Resources" (you can change it). That folder is very important, because it will be part of the path used to retrieve your resources from the framework product.</p>

<p><span style="color: #3366ff;"><strong>Tip:</strong> To add many files at once, click on the “+” button and write the files’ extension on the search field. For example “.m”, “.c”, “.cpp”, “.h”, etc. This can save a lot of time.</span></p>

<p>This is how your "<strong>Build Phases</strong>" tab will look: <br />
<img src='http://db-in.com/images/xcode_copy_headers.jpg'  alt="Define your compilable source and the headers." title="xcode_copy_headers" width="600" class="size-full wp-image-1406" /></p>

<p><br/><a name="building_universal"></a> <br />
<h2><strong>Building the Universal Framework</strong></h2><a href="#list_contents">top</a> <br />
<a name="step_6"></a><h3>6. Creating Universal Target:</h3><a href="#list_contents">top</a> <br />
To join both architecture products into one, we must use the <strong>Lipo Tool</strong>. It's a tool which comes with the iOS SDK; for the record, it lives in "<Xcode Folder>/Platforms/iPhoneOS.platform/Developer/usr/bin" under the file name "lipo". But we don't need to know this path; Xcode deals with it for us.</p>

<p>Add a new target: hit the "<strong>Add Target</strong>" button, just as you did with the Bundle target. This time a good choice is the "Aggregate" target. It doesn't create any product directly; its purpose is just to aggregate other targets and/or run some scripts, exactly what we want! To use the Lipo Tool we'll create a "<strong>Run Script</strong>" in the "<strong>Build Phases</strong>".</p>

<p><img src='http://db-in.com/images/xcode_build_all.jpg'  alt="Use the &quot;Aggregate&quot; target to construct a run script." title="xcode_build_all" width="600" class="size-full wp-image-1409" /></p>

<p><br/><a name="step_7"></a> <br />
<h3>7. Lipo Tool Script:</h3><a href="#list_contents">top</a> <br />
This will be our greatest trick. The following script does everything we need: it compiles the Framework target for iOS Device and Simulator at once, merges them with the Lipo tool and organizes a proper framework bundle structure.</p>

<p>Copy and paste this on your "Run Script" phase:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Xcode Script to Lipo Tool</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
# Sets the target folders and the final framework product.
FMK_NAME="FI"  
FMK_VERSION="A"

# Install dir will be the final output of the framework.
# The following line creates it in the root folder of the current project.
INSTALL_DIR=${SRCROOT}/Products/${FMK_NAME}.framework

# Working dir will be deleted after the framework creation.
WRK_DIR=build  
DEVICE_DIR=${WRK_DIR}/Release-iphoneos/${FMK_NAME}.framework  
SIMULATOR_DIR=${WRK_DIR}/Release-iphonesimulator/${FMK_NAME}.framework

# Building both architectures.
xcodebuild -configuration "Release" -target "${FMK_NAME}" -sdk iphoneos  
xcodebuild -configuration "Release" -target "${FMK_NAME}" -sdk iphonesimulator

# Cleaning the old output.
if [ -d "${INSTALL_DIR}" ]  
then  
rm -rf "${INSTALL_DIR}"  
fi

# Creates and renews the final product folder.
mkdir -p "${INSTALL_DIR}"  
mkdir -p "${INSTALL_DIR}/Versions"  
mkdir -p "${INSTALL_DIR}/Versions/${FMK_VERSION}"  
mkdir -p "${INSTALL_DIR}/Versions/${FMK_VERSION}/Resources"  
mkdir -p "${INSTALL_DIR}/Versions/${FMK_VERSION}/Headers"

# Creates the internal links.
# They MUST use relative paths, otherwise they will not work when the folder is copied/moved.
ln -s "${FMK_VERSION}" "${INSTALL_DIR}/Versions/Current"  
ln -s "Versions/Current/Headers" "${INSTALL_DIR}/Headers"  
ln -s "Versions/Current/Resources" "${INSTALL_DIR}/Resources"  
ln -s "Versions/Current/${FMK_NAME}" "${INSTALL_DIR}/${FMK_NAME}"

# Copies the headers and resources files to the final product folder.
cp -R "${DEVICE_DIR}/Headers/" "${INSTALL_DIR}/Versions/${FMK_VERSION}/Headers/"  
cp -R "${DEVICE_DIR}/" "${INSTALL_DIR}/Versions/${FMK_VERSION}/Resources/"

# Removes the binary and header from the resources folder.
rm -r "${INSTALL_DIR}/Versions/${FMK_VERSION}/Resources/Headers" "${INSTALL_DIR}/Versions/${FMK_VERSION}/Resources/${FMK_NAME}"

# Uses the Lipo Tool to merge both binary files (i386 + armv6/armv7) into one Universal final product.
lipo -create "${DEVICE_DIR}/${FMK_NAME}" "${SIMULATOR_DIR}/${FMK_NAME}" -output "${INSTALL_DIR}/Versions/${FMK_VERSION}/${FMK_NAME}"

rm -r "${WRK_DIR}"  
</pre>
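<p>To see why the script insists on relative links, here is a small standalone sketch (reusing the FI/A names from the script) that rebuilds the same link structure in a scratch folder and then moves it; the links keep resolving after the move:</p>

```shell
# Build the framework's Versions/Current link structure with RELATIVE
# symlinks, then move the whole folder to prove the links still work.
DEMO_DIR="fi_demo"
rm -rf "${DEMO_DIR}" && mkdir -p "${DEMO_DIR}"
FMK="${DEMO_DIR}/FI.framework"
mkdir -p "${FMK}/Versions/A/Headers"
ln -s "A" "${FMK}/Versions/Current"                # relative link
ln -s "Versions/Current/Headers" "${FMK}/Headers"  # relative link
touch "${FMK}/Versions/A/Headers/FI.h"

# Move the framework somewhere else; the relative links still resolve.
mv "${FMK}" "${DEMO_DIR}/moved.framework"
ls "${DEMO_DIR}/moved.framework/Headers/"
```

Had the links been absolute (pointing at the original location), the moved copy would be broken, which is exactly what the comment in the build script warns about.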

<p>Now build the Aggregate target. It doesn't matter whether you build for iOS Device or Simulator; this script will create a working folder, compile the framework target twice in there (device + simulator) and output a folder called "<strong>Products</strong>" located in the project root folder. There is your <strong>Universal Framework for iOS</strong>!</p>

<p><strong>Congratulations!</strong></p>

<p><br/><a name="importing"></a> <br />
<h2><strong>Importing your Universal Framework</strong></h2><a href="#list_contents">top</a> <br />
<a name="step_8"></a><h3>8. Importing:</h3><a href="#list_contents">top</a> <br />
To test your Universal Framework, create a new Xcode project, select the Application target and go to the "<strong>Build Phases</strong>" tab. Open the section "<strong>Link Binary With Libraries</strong>" and hit the "<strong>+</strong>" to add a new framework. Click the "<strong>Add Other...</strong>" button and select your Universal Framework. Remember, you must select the "<strong>.framework</strong>" folder. Remember to import your framework's principal header using framework notation. Xcode will use your public headers for code completion.</p>

<p>Let's understand what has happened up to here: <br />
When we set the "Mach-O Type" to "Relocatable Object File", Xcode treats everything related to that package like a binary archive, like a ZIP, but one that must be compiled again in new projects. <br />
<img src='http://db-in.com/images/xcode_bundle_import.jpg'  alt="" title="xcode_bundle_import" width="298" height="262" class="alignleft size-full wp-image-1445" /> <br />
Then, when we create a framework bundle structure, Xcode understands that everything inside it is organized in folders like "Headers" and the compiled binary. But, as with any other external bundle, to retrieve the resources you must load the external bundle. On iOS this could be an annoying step; however, our framework structure can help. Just click and drag your framework icon from the "<strong>Project Navigator</strong>" to "<strong>Copy Bundle Resources</strong>". By doing this, all the resources in your framework will be copied to your application's main bundle.</p>

<p>Now, to retrieve the resources, make use of the main bundle, just as you are used to:  </p>

<table width="675">  
<tbody>  
<tr>  
<th>Framework Bundle</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
[[NSBundle mainBundle] pathForResource:@"FI.framework/Resources/FileName"
                                ofType:@"fileExtension"];
</pre>

<p><br/><a name="conclusion"></a> <br />
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a> <br />
Well done, my friends! As usual, let's make a final review and watch out for some possible problems.  </p>

<ul>  
    <li>In a common Xcode project, create a Bundle target.</li>
    <li>Make the necessary setup, place your sources, headers and resources in it.</li>
    <li>Create an Aggregate target and place a <strong>Run Script</strong> in it.</li>
</ul>

<p>One last piece of advice: take care with your class structure. If you set, for example, ClassB.h as a Project or Private header, but in your code you import it into a Public header, this will cause conflicts.</p>

<p>And one last tip: notice that in the sample project I removed the scheme for the "Bundle Target". We don't need that scheme any more, because the new script manages the compilation for us.</p>

<p>That's all, buddies. <br />
Enjoy your Framework to iOS!</p>

<p>Thanks for reading, <br />
See you soon!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/universal-framework-for-ios/</link><guid isPermaLink="false">e309adbb-1a01-4654-9405-4ddb5c3af447</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Wed, 05 Feb 2014 14:00:00 GMT</pubDate></item><item><title><![CDATA[All about Shaders - (part 1&#x2F;3)]]></title><description><![CDATA[<p><img src='http://db-in.com/images/glsles_featured.jpg'  alt="" title="Binary world" width="200" height="200" class="alignleft size-full" />Hello again, my friends!</p>

<p>Let's start a new series of tutorials. This time let's go deep into the shader universe, the most exciting part of the OpenGL programmable pipeline. We'll cover textures, lights, shadows, per-pixel effects, bump, reflections and more.</p>

<p>This series is composed of 3 parts:  </p>

<ul>  
    <li>Part 1 - Basic concepts about GLSL ES (Beginners)</li>
    <li>Part 2 - Shaders Effects (Intermediate)</li>
    <li>Part 3 - Mastering effects with OpenGL Shader Language (Advanced)</li>
</ul>  

<!--more-->  

<p><a name="list_contents"></a> <br />
Here is a little list of contents to orient your reading:  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#shading_types">Shading Types</a></li>
    <li><a href="#opengl_shaders">OpenGL Shaders</a></li>
    <li><a href="#normal">Normal Vector</a>
        <ul>
            <li><a href="#smooth_angle">Normal's Smooth Angle</a></li>
        </ul></li>
    <li><a href="#tangent">Tangent Space</a></li>
    <li><a href="#texcoord">Texture Coordinates</a></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p><img src='http://db-in.com/images/glsles_featured.jpg'  alt="" title="glsles_featured" width="300" height="300" class="alignleft size-medium wp-image-1453" />We'll study the shader language in depth (more specifically GLSL ES, the shader language for Embedded Systems) and create great effects with shaders, like specular lights, reflections, bump maps, refractions and more.</p>

<p>In this first part I'll cover the basic concepts about shaders and GLSL ES. We'll need to drill deep into something called Tangent Space, which is an intermediate-level topic, so I'll create an article between parts 1 and 2 specifically to treat Tangent Space concepts and their creation.</p>

<p>In the second part we'll start creating some interesting effects like specular lights, reflections and refractions. Besides, in that part we'll create environment mapping using cube textures.</p>

<p>Finally, in the last part, we'll make the most advanced effect, bump mapping, and see the difference between Normal Mapping, Bump Mapping and Parallax Mapping.</p>

<p>Hands to work!</p>

<p><br/><a name="shading_types"></a> <br />
<h2><strong>Shading Types</strong></h2><a href="#list_contents">top</a> <br />
First off, we need to understand the evolution of what we call shaders. Today we have many computations on the GPU and several shader techniques that achieve really good results, but how did we get here?</p>

<p>Once upon a time, there was a single shading technique, called <strong>Flat Shading</strong>. It defines that light is computed with the normal vector and each FACE of the mesh has a normal vector. The term "FACE" here means a polygon (usually a triangle).</p>

<p><img src='http://db-in.com/images/shader_techniques_flat.jpg'  alt="Flat Shading" title="shader_techniques_flat" width="600" height="503" class="size-full wp-image-1449" /></p>

<p>With the evolution of hardware we started to process values for each vertex; this improvement took us to a new level, allowing richer light effects. This shading technique was called <strong>Gouraud Shading</strong>.</p>

<p><img src='http://db-in.com/images/shader_techniques_goraud.jpg'  alt="Gouraud Shading" title="shader_techniques_goraud" width="600" height="503" class="size-full wp-image-1450" /></p>

<p>Then we discovered something that produces really nice results: <em>interpolation</em>. It's a process by which we calculate all the intermediate points between two other points, and we use it all the time. For example, if a point A has a texture coordinate U:1 V:2 and a point B has U:2 V:5, the interpolation will calculate all the intermediate points from (1,2) to (2,5). This is known as <strong>Phong Shading</strong>.</p>

<p><img src='http://db-in.com/images/shader_techniques_phong.jpg'  alt="Phong Shading" title="shader_techniques_phong" width="600" height="503" class="size-full wp-image-1451" /></p>

<p>But for many reasons, to simulate the real world we need more than a linear interpolation. Surfaces have many details that change the way light and shadow react. So we discovered a way to produce infinitely many values over a surface using less processing time. This technique was called <strong>Bump Shading</strong>.</p>

<p><img src='http://db-in.com/images/shader_techniques_bump.jpg'  alt="Bump Shading" title="shader_techniques_bump" width="600" height="503" class="size-full wp-image-1452" /></p>

<p>Today we have some advanced Bump techniques, but all of them share the same basic concept: store each surface deformation value as an RGB color. So, basically, Bump Shading takes advantage of a texture map that stores coordinates in an RGB format. The coordinates in there can be used for many things: Normals, Tangents, Bitangents, vertex positions or anything else we want. Usually the textures for <strong>Bump Shading</strong> are called Normal Maps, because we store the Normal Vector values in them.</p>

<p>By default, OpenGL uses Phong Shading, interpolating between the Vertex Shader's outputs and the Fragment Shader's inputs. This information is very important, so I'll repeat it: "Vertex Shader's outputs are interpolated to Fragment Shader's inputs". Technically, this is what happens:</p>

<table style="text-align:center;">  
  <tr>
    <th colspan="3">Interpolation Table</th>
  </tr>
  <tr>
    <th>&nbsp;</th>
    <th>Vertex Shader</th>
    <th>Fragment Shader</th>
  </tr>
  <tr>
    <th>Vertex A</th>
    <td>0.0</td>
    <td>0.0</td>
  </tr>
  <tr>
    <th>&nbsp;</th>
    <td>-</td>
    <td>0.25</td>
  </tr>
  <tr>
    <th>&nbsp;</th>
    <td>-</td>
    <td>0.50</td>
  </tr>
  <tr>
    <th>&nbsp;</th>
    <td>-</td>
    <td>0.75</td>
  </tr>
  <tr>
    <th>Vertex B</th>
    <td>1.0</td>
    <td>1.0</td>
  </tr>
</table>

<p><br/><a name="opengl_shaders"></a> <br />
<h2><strong>OpenGL Shaders</strong></h2><a href="#list_contents">top</a> <br />
Here is a brief review about the shaders that we've seen on previous tutorials:  </p>

<ul>  
    <li>Shader is the way to calculate everything related to our 3D objects by our own (from the vertices positions to the most complex light equations).</li>
    <li>Vertex Shaders (VSH) are processed once for each of the object's vertices. Fragment Shaders (FSH) are processed once for each fragment (not necessarily a pixel) of the visible object (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  title="All about OpenGL ES (2/3)" target="_blank">http://blog.db-in.com/all-about-opengl-es-2-x-part-1</a>).</li>
    <li>You can set constant values for Uniforms to work throughout the VSH and FSH processing (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2'  title="All about OpenGL ES (2/3)" target="_blank">http://blog.db-in.com/all-about-opengl-es-2-x-part-2</a>).</li>
    <li>Dynamic variables can be assigned only to the Attribute kind, which is exclusive to the VSH. You can send a variable from the VSH to the FSH via Varyings, but remember that those values will be <span style="color:#FF0000"><strong>interpolated</strong></span>!</li>
</ul>

<p>The shaders are small pieces of code that are processed directly on the GPU. Unfortunately, even these days (2011) our hardware is very limited and slow compared with the real amount of calculation that exists in the real world. The most advanced hardware trying to compute the real phenomenon of sunlight passing through the water in a pool could take days to calculate a single frame. On the other hand, nature "calculates" all physical phenomena instantly (OK, mother nature makes a kind of "calculation", not exactly an equation).</p>

<p>While we stay in the "Era of Bits" we can't try to calculate the real phenomena (I wrote an article about the "<a href='http://blog.db-in.com/binary-world/'  target="_blank">Binary World</a>" where I talked about the new era of quantum computers; maybe there, in the "Era of Quantums", we'll be able to reproduce our 3D world closer to reality).</p>

<p>Anyway, what I mean is that shaders try to reproduce the real world with very abstract code, making a bunch of simplifications. So if you want to master shaders, you must learn to extract abstract pieces of code from real-world phenomena. But don't worry too much about it now; in time you'll see that it's an easy task and can be cool as well.</p>

<p>It's important to start thinking in terms of a "Shader Program". It's a set of 2 (and only 2) shaders: a vertex shader and a fragment shader. So we must think of the render as 2 different steps (vertex and fragment). Usually the Fragment Shader is processed many more times than the Vertex one. If a mesh has 10,000 vertices, its Vertex Shader will generate 10,000 outputs to the Fragment Shader. Remember that those outputs will always be interpolated during the Fragment processing. So, to increase performance, we always try to place heavy calculations in the Vertex Shader. Obviously there are calculations whose values can't accept interpolation, like the bump effect; only in those cases do we make the calculations inside the Fragment Shader.</p>

<p>However, make sure you get the correct distinction between the concept of making the calculations inside the Vertex Shader and another thing called Per-Vertex/Per-Fragment Light. We'll see those concepts in-depth later on, but just to clarify:  </p>

<ul>  
    <li><strong>Per-Vertex Light</strong> means you have all the light calculations inside the Vertex Shader and then you interpolate the result to the Fragment Shader.</li>
    <li><strong>Per-Fragment Light</strong> means you have the final light calculations (the output value) inside the Fragment Shader, regardless of whether the first steps were made in the Vertex or in the Fragment Shader.</li>
</ul>

<p>The interpolation happens on all Vertex Attributes, like the Texture Coordinates. As shown above in the Interpolation Table, the interpolated values from a texture coordinate can retrieve all the pixels from a texture. The Texture Coordinates are usually defined by a technique called "UV Map" or "UV Unwrap Map", which is an artistic job; it's actually almost impossible to create detailed UV Maps with code alone. Often a professional 3D software exports the Texture Coordinate values along with the model coordinates, based on definitions from a user-friendly UV editor.</p>

<p>But there is another per-vertex Attribute very important to the 3D world. With shaders we calculate lights, shadows, reflections, refractions and any other effects we want. All of them need something in common: a <strong>Normal</strong> vector.</p>

<p><br/><a name="normal"></a> <br />
<h2><strong>Normal Vector</strong></h2><a href="#list_contents">top</a> <br />
This is one of the most important per-vertex attributes and its concept is very easy to understand. In the real world, basically, there are two things that can alter how light rays affect a surface: the material (reflectiveness, refraction, specularity, shininess, etc.) and the surface's angle. Well, actually, the point of view (the viewer's eyes) also affects how we see the light, but let's focus on the first two things. The normal vector is related to the surface's angle. As performance is crucial to us, instead of re-calculating the angle of each surface (triangle) at every shader pass, we pre-calculate a normal vector for every surface (triangle).</p>

<p>In simple words, the normal vector is a unit vector (magnitude equal to 1.0, with each axis component ranging between [-1.0, 1.0]) which represents the surface's angle. Nice. Now, how can we calculate it?</p>

<p>Well, that's not an easy task, my friend. I had to read/watch/try a bunch of tutorials until I found the right formula. There are many people trying to teach how to calculate the normals. Some say that you must calculate per-face normals and store them into a buffer, others say to calculate the averaged normals between adjacent faces, some even say that you need to calculate each surface's area to include in your final calculation. But no one gave me the right formula! I had to find it by myself, with the help of a great 3D software called MODO (by the way, I love it!).</p>

<p>Unfortunately, I won't explain how to calculate the normals in this first tutorial. I'll create a separate article to show you how to get the right formula to calculate the normals. The normals deserve more attention than a simple subject inside one tutorial.</p>

<p>The most valuable point here is to understand that the Normal is a unit vector, and to visualize how the normals work together and how they fit into our shaders' context.</p>

<p><br/><a name="smooth_angle"></a> <br />
<h3>Normal's Smooth Angle</h3><a href="#list_contents">top</a> <br />
<img src='http://db-in.com/images/shaders_bowling_example.jpg'  alt="" title="shaders_bowling_example" width="300" height="240" class="alignright size-medium wp-image-1455" />As we always make an abstraction of the real world, trying to simplify it, we have created a concept that does not exist in the real world: the Smooth Angle. Imagine this: in the real world, surfaces have infinitely many vertices. Take for example the image of a sphere, maybe a bowling ball.</p>

<p>Try to imagine the smallest face/triangle that composes that bowling ball. Even with a microscope, we will never see a faceted area. Now take a look at our virtual spheres: even if we create a 3D mesh using a stupidly high resolution of 8 million polygons, we'll stay very far from the perfection of the real world. And for our 3D applications and games, we must work with thousands of polygons, not millions. Does that mean our 3D lights will look ugly on low-poly meshes? Fortunately we have a solution.</p>

<p><img src='http://db-in.com/images/shaders_sphere_example.jpg'  alt="Our 3D world always will have imperfections." title="shaders_sphere_example" width="600" height="570" class="size-full wp-image-1454" /></p>

<p>This problem can be solved with the Normal's Smooth Angle. In simple words, it represents the maximum angle at which the light will look continuous when reflected by a surface. The following picture helps us understand this point better:</p>

<p><img src='http://db-in.com/images/shaders_smooth_angle.jpg'  alt="The Smooth Angle is the angle between two adjacent faces." title="shaders_smooth_angle" width="600" height="600" class="size-full wp-image-1456" /></p>

<p>Remember that the smooth angle should be applied when we calculate the Normals, so any later change to the smooth angle means recalculating all the Normals. I'll talk more about the smooth angle in the article about the Normals calculations.</p>

<p><br/><a name="tangent"></a> <br />
<h2><strong>Tangent Space</strong></h2><a href="#list_contents">top</a> <br />
The Tangent Space is composed of three vectors, one of which is the Normal Vector. As we've already talked about the normals, let's focus on the other two: the Tangent and the Bitangent (also known as Binormal, but the term "Binormal" is a misnomer).</p>

<p>The Tangent and Bitangent are unit vectors, just like the Normal, and the combination of these three components must form an orthogonal and orthonormal set. Before we go ahead, let me explain these two concepts in simple words:  </p>

<ul>  
    <li><strong>Orthogonal</strong>: Two vectors that are perpendicular (form an angle of 90 degrees).</li>
    <li><strong>Orthonormal</strong>: A set of vectors that are all orthogonal and unit vectors.</li>
</ul>

<p>OK, this set of three vectors called the Tangent Space is defined per-vertex. Its purpose is to define a local space for each face/vertex, which will be used to interpret the surface's imperfections (bump map). The bump map (also known as normal map) is an RGB map that defines each relief of the surface.</p>

<p>This may sound confusing as a text explanation. Just try to imagine this: as we always optimize everything in the 3D world, the bump map is a technique that stores the surface's deformations in a single image file. The Tangent Space is a set of vectors that allows us to parse the bump information for each fragment, independent of the mesh's rotation, position or scale. The following image illustrates the Tangent Space and its connection with the Texture Coordinates.</p>

<p><img src='http://db-in.com/images/tangent_space_example.jpg'  alt="The Tangent Space is a set of 3 vectors." title="tangent_space_example" width="600" height="600" class="size-full wp-image-1457" /></p>

<p>Basically, the Tangent Vector points to where the "<strong>S</strong>" coordinate increases (S Tangent) and the Bitangent points to where the "<strong>T</strong>" coordinate increases (T Tangent).</p>

<p>A single vertex can have more than one Tangent Space; in that case the vertex is split into two or more vertices with the same position value. Actually there are other important concepts in Tangent Space, but I won't bother you with the details here. Just as with the Normals calculations, I'll leave this complex part to another article dedicated to that subject. The important thing here is that you understand what bump maps are and how important the Tangent Space is to make bump effects.</p>

<p><br/><a name="texcoord"></a> <br />
<h2><strong>Texture Coordinates</strong></h2><a href="#list_contents">top</a> <br />
This is the most common vector component for shaders. It's responsible for placing an image on a mesh's surface. The Texture Coordinate (texcoord for short) is defined per-vertex. It'll be interpolated between two vertices to achieve a per-fragment result. The texcoord is usually given in the range [0.0, 1.0], representing the order [S, T], which are the normalized values of the [U, V] notation.</p>

<p>The texcoord has more to do with the artistic work than with our code; usually the 3D softwares are responsible for generating it. The texcoord directly affects how the texture image file should be created; I mean, the image of the texture must be created based on the texcoord positions. There are some 3D softwares that accept multiple texcoord channels. It could be good for some situations in which multiple designers are working together, but it's not good for performance and optimization. There is nothing that multiple texcoord channels can do that a single channel can't. So, keep it simple: always try to work with a single texcoord channel.</p>

<p>The texcoord is very important to create the Tangent Space. Multiple texcoord channels will need multiple Tangent Spaces as well. So multiple channels are never a good idea.</p>

<p><br/><a name="conclusion"></a> <br />
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a> <br />
OK, my friends, I don't want to make this first tutorial too long, so these are the basic concepts about shaders. Now you know how the shaders work, what their limitations are, where their power lies, and what we need before entering the shaders' world.</p>

<p>In this tutorial you saw:  </p>

<ul>  
    <li>The Shaders are responsible for all the visual results of our 3D world, including lights, shadows, reflections, refractions, etc.</li>
    <li>The four most used shading techniques are: Flat Shading, Gouraud Shading, Phong Shading and Bump Shading. OpenGL by default uses Phong Shading.</li>
    <li>The values from Vertex Shader to Fragment Shader will always be interpolated.</li>
    <li>We have 3 very important per-vertex vectors: Position, Normal and Texture Coordinate.</li>
    <li>The Normal Vector plus 2 others form the Tangent Space, a fundamental concept to produce Bump Shading and any other displacement technique (like Parallax Mapping).</li>
    <li>Always try to use only one texcoord channel.</li>
</ul>

<p>Well done! <br />
My next article will not be the second part of this series; instead, it'll be a short article covering how to calculate the Normal Vector and the Tangent Space. We'll need to have those vectors exactly right before entering the real calculations inside the shaders' world.</p>

<p>If you have any doubts, just Tweet me:  </p>

<script src='http://platform.twitter.com/widgets.js'  type="text/javascript"></script>  

<p><a href='http://twitter.com/share?text=@dineybomfim' class="twitter-share-button" data-related="dineybomfim" data-text="@dineybomfim" data-count="none" data-url="">Tweet</a> </p>

<p>See you soon!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/all-about-shaders-part-1/</link><guid isPermaLink="false">c703134e-fd41-44d4-8b49-65384e57e18f</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Wed, 05 Feb 2014 09:48:16 GMT</pubDate></item><item><title><![CDATA[Nippur Transition]]></title><description><![CDATA[<p><img src='http://db-in.com/images/npptransition_icon.png'  alt="" title="Binary world" width="150" height="150" class="alignleft size-full" />Greetings!</p>

<p>It's a great pleasure to present a new part of my work. This one is a piece of a framework intended to be used in the daily job: to do many things, make your life easier and make programming faster. Today I want to show you the NPPTransition, an API to make custom animations and transitions between UIView and/or UIViewController using pure native Objective-C (aka Obj-C).</p>

<p>Just by importing it you can change all the transitions (push and modal) of your application, even without writing a single line of code. Let's talk about it.</p>

<!--more-->

<p>&nbsp;</p>

<h2><strong>At a glance</strong></h2>  

<p>First of all, it's an Open Source, here are the links to download it:</p>

<p><a style="margin: 20px 0px; display: block; background-color: #205081; border: 1px solid #76bee9; padding: 15px 10px; text-decoration: none; overflow: hidden; -moz-border-radius: 12px; -webkit-border-radius: 12px; -o-border radius: 12px; -ms-border-radius: 12px; -khtml-border-radius: 12px; border-radius: 12px; color: #fff; font-weight: bold; width: 300px;" onmousedown="_gaq.push(['_trackEvent', 'NPPTransition', 'Download', 'Zip']);" href='https://bitbucket.org/dineybomfim/npptransition/get/master.zip' ><img class="alignleft" style="border-radius: 0px; box-shadow: 0px 0px 0px;" title="download" alt="Download Xcode project files to iPhone" src='http://db-in.com/nippur/transition/img/zip_icon.png'  /> <br />
<strong>Download now</strong> <br />
Source + Sample <br />
1.4Mb <br />
</a></p>

<p><a style="color: #fff; margin: 20px 0px; display: block; background-color: #205081; border: 1px solid #76bee9; padding: 15px 10px; text-decoration: none; overflow: hidden; -moz-border-radius: 12px; -webkit-border-radius: 12px; -o-border radius: 12px; -ms-border-radius: 12px; -khtml-border-radius: 12px; border-radius: 12px; font-weight: bold; width: 300px;" onmousedown="_gaq.push(['_trackEvent', 'NPPTransition', 'Download', 'Bitbucket']);" href='https://bitbucket.org/dineybomfim/npptransition' ><img class="alignleft" style="border-radius: 0px; box-shadow: 0px 0px 0px;" title="download" alt="Download Xcode project files to iPhone" src='http://db-in.com/nippur/transition/img/bitbutcket_icon.png'  /> <br />
<strong>Bitbucket</strong> <br />
Git Project <br />
</a></p>

<p>And this is the official web page: <a title="NPPTransition" href='http://db-in.com/nippur/transition'  target="_blank">http://db-in.com/nippur/transition</a></p>

<h2><strong>What does it do?</strong></h2>  

<p>Well, I prefer to show you:  </p>

<iframe src='http://player.vimeo.com/video/68448895?title=0&amp;byline=0&amp;autoplay=1&amp;loop=1'  width="301px" height="431px" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>

<p>This video is from the sample code. These transitions are happening between UIViewController and UINavigationController instances. Every transition can be customized, and there is also a global setting. Besides, any UIView can be animated to another UIView.</p>

<h2><strong>How does it work?</strong></h2>  

<p>The first important thing to understand is that it's completely "iOS version free". What exactly does that mean? It means it does not rely on the UIKit framework, which changes a lot from version to version (which sucks, BTW). It relies entirely on the QuartzCore and CoreGraphics frameworks; besides, those frameworks are also used for desktop development (Cocoa for Mac OS), so it can easily be used for Mac OS development as well.</p>

<p>First, it creates a solid base that deals with the sizes and positions of the views. Then a "snapshot" of the views is taken as images, and these images are used to make the animation/transition. At this point all the responsibility for the animation/transition is up to the subclasses. After the animation, the task returns to the main base, which again deals with how to place, remove, show or hide the final views in their places.</p>

<p>By default, the NPPTransition uses a category trick to replace the normal iOS behavior, it swizzles the UINavigationController and UIViewController methods to make its magic.</p>

<p>There are just two steps to get it working on your project:  </p>

<ol>  
    <li>Import the source folder "Nippur" or the "Nippur.framework" on your project (drag and drop). Also make sure the <strong>QuartzCore</strong> and <strong>CoreGraphics</strong> frameworks are within your project.
<img src='http://db-in.com/images/quartz_core.jpg'  alt="quartz_core" width="260" height="300" class="size-medium wp-image-1504" /></li>  
    <li>In your project's prefix file (.pch), write according to your import:
<img src='http://db-in.com/images/nippur_importing.jpg'  alt="nippur_importing" width="347" height="114" class="size-full wp-image-1505" /></li>  
</ol>

<p>It's done! <br />
Now just build and run your project to see what happens. If you try this in one of your existing projects, you'll see how differently things go.</p>

<h2><strong>Using the NPPTransition</strong></h2>  

<p>Although by default it uses a category trick to replace the transitions, you can turn this feature ON or OFF as you wish, and you can do it individually, just for Push or just for Modal transitions. Here is how it goes:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Turning ON/OFF the category feature</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// To turn ON/OFF the category feature for push transitions.
[NPPTransition definePushTransitionCategoryUsage:NO];

// To turn ON/OFF the category feature for modal transitions.
[NPPTransition defineModalTransitionCategoryUsage:YES];
</pre>

<p>As it happens automatically, you can also define some other default settings:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Playing with default settings</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// To define the push transition class.
[NPPTransition definePushTransitionClass:[NPPTransitionFold class] direction:NPPDirectionDown];

// To define the modal transition class.
[NPPTransition defineModalTransitionClass:[NPPTransitionTurn class] direction:NPPDirectionUp];

// To define the default duration for all transitions.
[NPPTransition defineTransitionDuration:1.0f];
</pre>

<p>As you noticed, the transition happens inside a subclass. It's the one responsible for the custom animations, and all the basic parameters stay the same because the NPPTransition abstract class has defined them.</p>

<p>Now you may ask: "OK, but if I turn OFF the category feature, will the animations still happen automatically?". No, in this case you should manually replace the push, pop, present or dismiss methods with the correlated ones from NPPTransition. But don't worry, it's very easy to find them: they start with the letters <strong>"npp"</strong>.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Making UIKit transitions manually</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// Instead of:
[myViewController pushViewController:otherController animated:YES];
// Use now:
[myViewController nppPushViewController:otherController animated:YES transition:nil];

// ---

// Instead of:
[myViewController presentViewController:otherController animated:YES completion:nil];
// Use now:
[myViewController nppPresentViewController:otherController animated:YES transition:nil];

// ---

// And so on...
</pre>

<p>If you set the transition as <strong>"nil"</strong>, the default class for that kind of transition (push or modal) will be used in its place. For example, following the two pieces of code above, our push transition will be NPPTransitionFold and the modal transition will be NPPTransitionTurn.</p>

<p>Now, if you set a specific transition, it will be respected, except for the parameters that are specific to each ViewController and its current state; that means the NPPTransition properties <strong>"fromView"</strong>, <strong>"toView"</strong> and <strong>"backward"</strong> will be ignored.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Making a specific transition</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
NPPTransitionFold *fold = [NPPTransitionFold transitionWithCompletion:nil];  
[[self presentingViewController] nppDismissViewControllerAnimated:YES transition:fold];
</pre>

<p>As you noticed, every NPPTransition has a <strong>completion block</strong>, which means you can get a completion notification for any kind of transition (push or modal).</p>

<p>And what about minor animations, like animating two objects inside a view? <br />
It's very easy to do as well.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Minor animations</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// Creating a custom transition.
NPPTransitionCube *cube = [NPPTransitionCube transitionFromView:aViewInStage toView:aNewView completion:nil];  
cube.direction = NPPDirectionDown;  
cube.mode = NPPTransitionModeOverride;  
[cube perform];

// Preparing it to go back after 2 seconds.
[cube performBackwardAfterDelay:2.0f];
</pre>

<h2><strong>Conclusion</strong></h2>  

<p>Very well my friends, that's it. There is a lot more in the sample project. Get it by downloading the zip or cloning the Git project.</p>

<p>If you have any doubts, just Tweet me:  </p>

<script type="text/javascript" src='http://platform.twitter.com/widgets.js' ></script>  

<p><a class="twitter-share-button" href='http://twitter.com/share?text=@dineybomfim' target="_blank" data-related="dineybomfim" data-text="@dineybomfim" data-count="none" data-url="">Tweet to @dineybomfim</a></p>

<p>See you soon!</p>

<iframe src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  height="130" width="100%" scrolling="no"></iframe>]]></description><link>http://blog.db-in.com/nippur-transition/</link><guid isPermaLink="false">c342a7a6-ccac-4a84-bb82-f80062b34638</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Wed, 05 Feb 2014 09:39:47 GMT</pubDate></item><item><title><![CDATA[NinevehGL Features]]></title><description><![CDATA[<p><img src='http://db-in.com/images/ngl_feather.png'  alt="" title="ngl_feather" width="130" height="200" class="alignleft size-full wp-image-1390" />Hello everyone!</p>

<p>Today I want to talk about NinevehGL: more about its features and about what it can offer us. NinevehGL is almost done. I want to be as fast as possible, but I don't want to launch it until everything seems great: the documentation, the tutorials, the official website and, obviously, NinevehGL itself.</p>

<p>Soon I'll start posting videos and much more publicity about it, but right now, in this article, let's see some images taken directly from NinevehGL running.</p>

<p>Let's start!</p>

<!--more-->  

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>NinevehGL is a 3D engine fully made with the most pure Objective-C, right on top of the Cocoa Touch framework. So at this first moment, NinevehGL is a 3D engine only for iOS. I intend to port NinevehGL to Mac OS X (desktops with Obj-C) and also to ActionScript 3.0, using the great new API code-named Molehill. ActionScript is my old programming language; today I don't hold much love for it, but I know that there are many people that still love Flash. Another important step is to port NinevehGL to JavaScript to work with WebGL. But at the beginning, NinevehGL will work only with OpenGL ES 2.x for iOS.</p>

<p>Now the most important question: "Why NinevehGL instead of PowerVR, Oolong, SIO2, Torque, Ogre, UDK, Unity, Wolfenstein, Shiva, Galaxy, or any other? Why should I choose NinevehGL instead any other 3D engine?".</p>

<p>I always tried to convince everyone to buy what I was selling, trying to win everybody over to my point of view. But not this time; I don't want to sell anything, I don't want to convince you. If you are using some other engine and are happy with it, just continue with it, you don't have to change. Also, if you are a great OpenGL programmer and have your very own model and structure, continue with it, don't change anything. Because, again, I don't want to convince anyone this time. Just as with OpenGL, I'll focus on the most important thing to me: development!</p>

<p>I took everything that I've seen at its worst in other engines and tried to make it better in NinevehGL. I took all my frustrations with the current engines, all my baggage as a developer, all my knowledge about the 3D world, and put all of it into NinevehGL. So, if you are like me and have some dissatisfaction with the other engines, NinevehGL could be really nice for you! I know I say this every time, but I love this concept: "Keep it simple!" This is NinevehGL's main principle; everything was made to be simple. There are no commands that need more than one single line of code. If you want to make one thing, you need one line, nothing else.</p>

<p>Now I'll describe in detail the most important features of NinevehGL. Remember, I don't want to convince you of anything. I'm sure you are an advanced programmer and can form your own opinion about what's coming next. If you like it, great! If you don't, that's great as well! Just tell me why and I'll try to make it better.</p>

<p>Here is a feature list.  </p>

<table width="675">  
<tr>  
<th>NinevehGL Features</th>  
</tr>  
<tr><td>Imports OBJ file.</td></tr>  
<tr><td>Imports DAE (COLLADA) file.</td></tr>  
<tr><td>Works with NGL binary file (exclusive of NinevehGL).</td></tr>  
<tr><td>Cache 3D files to optimize the loading in next times (around 95% less loading time).</td></tr>  
<tr><td>Works with OpenGL ES 2.x (programmable pipeline).</td></tr>  
<tr><td>Full integrations with custom Shaders.</td></tr>  
<tr><td>Supports 3D and 2D application.</td></tr>  
<tr><td>Programming interface is 100% User Friendly and totally independent of OpenGL version.</td></tr>  
<tr><td>Absolutely oriented to performance and minimum size (NinevehGL is 10% - 60% faster than any other engine).</td></tr>  
<tr><td>Full support for PVRTC (even headerless files generated by Mac OS).</td></tr>  
<tr><td>Automatically calculates normals and tangent space to work with lights and bump maps.</td></tr>  
</table>

<p><br/><a name="importing"></a>  </p>

<h2><strong>Importing 3D files</strong></h2>  

<p>This is the most problematic topic for me regarding other engines. As OpenGL doesn't import 3D files directly, you must use an engine that imports a file format good for you, or use one of the files accepted by that engine, like POD for the PowerVR engine.</p>

<p>This is the first problem that NinevehGL works to solve: it can import the two most popular 3D file formats, COLLADA and WaveFront OBJ. You know what a mess 3D imports with 3D softwares are. Each software has its own file format and implements its importing routines in its own way. So it's not hard to export an OBJ from 3DS Max and, when you try to import it in Maya, BOOM, error! To tell the truth, incompatibility when exporting files from one 3D software to another is common.</p>

<p>NinevehGL has a full importing routine. What does that mean? If your OBJ file has information about the specular, the bump map, the reflection map or anything else supported by the OBJ format, NinevehGL will import and parse everything for you with no losses. The same is true for COLLADA files. <br />
<img src='http://db-in.com/images/importing_example.jpg'  alt="NinevehGL supports OBJ and DAE files." title="importing_example" width="600" height="600" class="size-full wp-image-1383" /></p>

<p><br/><a name="caching"></a>  </p>

<h2><strong>Caching 3D files</strong></h2>  

<p>This is the greatest feature, in my opinion. NinevehGL works with a third kind of file, the NGL binary file. It's a binary file containing a full 3D model with all the information NinevehGL and OpenGL need. This file is loaded through streaming, so it's extremely fast. No 3D software can export it; it's exclusive to NinevehGL. So how can you use it? There are two ways. First, you can convert your OBJ or COLLADA file online! Yes, you read it right, online conversion! No plugins, no installations, no complex steps: just access the NinevehGL website, choose the files on your machine and hit the button!</p>

<p>The other way is the most fascinating one. The NGL file is generated automatically by NinevehGL the first time you load a new OBJ or COLLADA file. This file is stored locally on the device under the folder <em><Application_Home>/Library/NinevehGL</em>. What is great here is that this folder is fully backed up by iTunes. So imagine this:  </p>

<ol>  
    <li>You make an app that loads OBJ and COLLADA files.</li>
    <li>A user downloads your app from the App Store.</li>
    <li>The user runs your app for the first time, and NinevehGL loads, let's suppose, around 20 3D files in 12 secs.</li>
    <li>The next times this user runs your app, the loading time will be 0.12 secs, for example.</li>
</ol>

<p>What happened??? <br />
It's great! On the second run, NinevehGL automatically identified that those 20 3D files had already been parsed before, so it uses the cached NGL binary files instead of the original ones!</p>

<p>But this feature doesn't stop here! Remember, NinevehGL was made to make everything simple! So let's suppose your application loads the 3D files over an internet connection instead of loading them locally. No problem! NinevehGL compares the modification dates of the files and chooses the most recent one. Imagine this:  </p>

<ol>  
    <li>Your app loads the 3D files from the internet, because you want to push updates without submitting a new version to the App Store.</li>
    <li>A user downloads your app. First run, 20 3D files in 12 secs.</li>
    <li>On the next runs, NinevehGL will use the NGL binary files, loading them in 0.12 secs.</li>
    <li>You make an update and upload some new 3D files.</li>
    <li>All the users that have your app will receive the update, with 10 new 3D files replacing the old ones, so NinevehGL will parse the newly downloaded files in 10 secs and generate new NGL binary files.</li>
    <li>On the next runs of your application, NinevehGL will use the newly generated NGL files and the loads will happen in 0.08 secs.</li>
</ol>
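<p>The date comparison described above can be sketched in plain C. This is only an illustration of the idea, not the actual NinevehGL API; the function name and paths are hypothetical:</p>

```c
#include <stdbool.h>
#include <sys/stat.h>

/* Hypothetical sketch: decide whether a cached NGL binary is still fresh
 * by comparing modification dates, as described above. The function name
 * is illustrative only, not NinevehGL API. */
static bool cacheIsFresh(const char *sourcePath, const char *cachePath)
{
    struct stat source, cache;

    /* If either file is missing, the cache can't be used. */
    if (stat(sourcePath, &source) != 0 || stat(cachePath, &cache) != 0)
        return false;

    /* The cache is valid only if it's at least as new as the source file. */
    return cache.st_mtime >= source.st_mtime;
}
```

<p>With this check, a newly downloaded OBJ or COLLADA file automatically invalidates the old binary cache, which is exactly the behavior described in the list above.</p>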

<p>I cry with happiness every time I explain this routine. It's amazing, wonderful, outstanding, great, it's... it's...  "magical"! Well, this is just my opinion about it. But I'm sure you can imagine how great this would be for the end user: "At the first run, a loading of a few seconds; next times, no loading? Where is the loading in this app?". The loading happens in a snap.</p>

<p><img src='http://db-in.com/images/importing_ngl_example.jpg'  alt="NinevehGL automatically saves NGL binary files locally." title="importing_ngl_example" width="600" height="600" class="size-full wp-image-1384" /></p>

<p>The last thing I want to say about this feature concerns optimization. Again, NinevehGL was made to be EXTREMELY FAST! So if you load an NGL binary file directly, it will not create a cache file for it, because it's already optimized; caches are created only for non-optimized files (OBJ and COLLADA). Now you know how to convert your 3D models to the NGL binary format: convert them online, at the official NinevehGL website, or just run your application in the simulator and take the binary file generated at the first run from <em><Application_Home>/Library/NinevehGL</em>!</p>

<p>In order to avoid confusion, it's important to say that NinevehGL is not responsible for managing the internet connection for file downloading. So you can't give it a URL to load a 3D file from. What NinevehGL does is manage the files locally.</p>

<p><br/><a name="shaders"></a>  </p>

<h2><strong>Custom Shaders</strong></h2>  

<p>NinevehGL was made to work with the OpenGL programmable pipeline. So obviously it creates its own shaders, based on the materials (loaded from 3D files or created directly in code). Well, the most fun part of the programmable pipeline is exactly the shaders, so will NinevehGL block us from using our own shaders? Of course NOT! NinevehGL is absolutely user friendly; it's flexible!</p>

<p>Other engines working with the programmable pipeline usually give us only two choices: work only with your very own shaders, or work only with their shaders! Well, that's not a choice, it's a dilemma! But NinevehGL offers a third way: "all of the above"! Yes, you read it right, NinevehGL has a Shader API capable of integrating two or more shaders into a single program. WOW, what does that mean?</p>

<p>It means you can create your own shaders, just as you are used to, implementing your own light effects, materials or anything else you want. Then you pass your shaders to NinevehGL and it will integrate your custom shaders with the shaders generated by the NinevehGL materials. No conflicts; everything works fine together!</p>

<p>This works for the Vertex Shader and/or the Fragment Shader. An important thing to note is that there are no special constraints; you can even use a shader that you already have, with many variables, many functions and a main function. The NinevehGL Shader API will interpret the code inside the shaders and fuse everything together.</p>

<p><img src='http://db-in.com/images/importing_shaders_example.jpg'  alt="NinevehGL can merge your custom shader with its own shaders." title="importing_shaders_example" width="600" height="326" class="size-full wp-image-1385" /></p>

<p><br/><a name="tangent_space"></a>  </p>

<h2><strong>Calculating Normals and Tangent Space</strong></h2>  

<p>This is the robust side of NinevehGL. It's very fast even in processing-heavy tasks, like calculating normals and tangent space for imported models. Normals and tangent space are very important for generating real-time lights, reflections, bump effects and many other things. NinevehGL automatically calculates the normals and generates the tangent space for each mesh that needs it.</p>

<p><img src='http://db-in.com/images/tangent_space_example.jpg'  alt="NinevehGL automatically generates normals and tangent space, if needed." title="tangent_space_example" width="600" height="600" class="size-full wp-image-1386" /></p>

<p>NinevehGL can automatically generate some effects, like specular lights, ambient light, emissive light, bump maps, reflections and others specified by NinevehGL materials. NinevehGL doesn't impose any limitations on you; it just works around the limits of OpenGL. So on iOS, for example, you can work with very high-polygon models. Obviously this is not a common situation, but sometimes it can be better to show high-polygon models on the screen. Even for these heavy meshes, NinevehGL will continue to produce the normals, tangent space, specular lights, bump maps, reflections and everything else you need.</p>

<p><img src='http://db-in.com/images/specular_light_example.jpg'  alt="NinevehGL also works with very high mesh models." title="specular_light_example" width="600" height="600" class="size-full wp-image-1388" /></p>

<p>Now I want to show you some other features that are also great, but which I won't cover in depth here.</p>

<p><br/><a name="textures"></a>  </p>

<h2><strong>Optimizing Textures</strong></h2>  

<p>Just to keep it fresh in our minds, NinevehGL was made to operate at maximum performance. So even the loaded textures are optimized. If you choose to work with an opaque OpenGL layer (the default), the textures can be optimized to the RGB565 format, which is the best choice without an alpha channel. If you choose to work with a transparent layer, the textures can be optimized to the RGBA4444 format.</p>
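<p>To illustrate what this optimization means, here is a minimal sketch (not NinevehGL code) of packing 8-bit-per-channel colors into those two 16-bit formats:</p>

```c
#include <stdint.h>

/* Illustrative only: pack an 8-bit-per-channel RGB color into RGB565
 * (5 bits red, 6 bits green, 5 bits blue), halving the texture memory. */
static uint16_t rgb888ToRGB565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Illustrative only: pack RGBA into RGBA4444 (4 bits per channel),
 * keeping the alpha channel for transparent layers. */
static uint16_t rgba8888ToRGBA4444(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return (uint16_t)(((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4));
}
```

<p>Either way, each pixel shrinks from 32 bits to 16 bits, which is why these formats are attractive on mobile GPUs.</p>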

<p>The optimization process happens automatically and you don't need to worry about it. Even for the PVRTC compressed texture format, these optimizations can happen if needed. The textures, like any other external file, can be loaded from any local path, and NinevehGL is ready to manage the local paths for you.</p>

<p>Plus, just as with the 3D files, textures have a kind of "cache": if you load an image more than once, NinevehGL will manage the load locally and will not spend more memory on the same cached image. This is a very important optimization, mostly in cases where a 3D file uses the same image for many things, like the ambient, diffuse and specular maps.</p>
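<p>Conceptually, that texture cache behaves like this hypothetical sketch, keyed by the image path (names and structure are illustrative, not the NinevehGL implementation):</p>

```c
#include <string.h>

#define MAX_TEXTURES 64

/* Hypothetical sketch of a path-keyed texture cache: loading the same
 * image twice returns the same entry instead of allocating a new one. */
typedef struct
{
    char path[256];
    unsigned int textureId; /* e.g. an OpenGL texture name */
} TextureEntry;

static TextureEntry _cache[MAX_TEXTURES];
static unsigned int _cacheCount = 0;
static unsigned int _nextTextureId = 1;

static unsigned int textureForPath(const char *path)
{
    unsigned int i;

    /* Reuse a previously loaded texture with the same path. */
    for (i = 0; i < _cacheCount; ++i)
        if (strcmp(_cache[i].path, path) == 0)
            return _cache[i].textureId;

    if (_cacheCount >= MAX_TEXTURES)
        return 0;

    /* Not cached yet: "load" it and remember the path. */
    strncpy(_cache[_cacheCount].path, path, sizeof(_cache[0].path) - 1);
    _cache[_cacheCount].textureId = _nextTextureId++;

    return _cache[_cacheCount++].textureId;
}
```

<p>So a model that uses the same image for its ambient, diffuse and specular maps pays the loading cost only once.</p>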

<p><br/><a name="more"></a>  </p>

<h2><strong>Many other things</strong></h2>  

<p>NinevehGL has many other important features, but I'll just list them, because other engines also have them, so they're not that exclusive:  </p>

<ul>  
    <li>Cameras with projections (perspective and orthographic).</li>
    <li>Customization of OpenGL behaviors, like the render buffers.</li>
    <li>A simple API for 3D transformations (like obj.rotateX += 1.0).</li>
    <li>Always uses OpenGL optimization features, such as Buffer Objects, arrays of structures, optimized texture formats, etc.</li>
</ul>

<p>One great thing to talk a little more about is the auto-corrections. It's very common for 3D modelers to create their models at different scales; I mean, a 3D file containing a spoon could have vertices ranging from -1000.0 to 1000.0, and another file containing a house could range from -0.5 to 0.7, for example. By placing both files in a 3D application you would probably see a spoon the size of the Empire State Building next to a tiny house.</p>

<p>For 3D software this is not a big deal, because there is an infinite work space and you can simply rearrange your objects visually. But for a programming language this is a problem, and it gets even bigger if the object is so big or so far away that you can't even see it on the screen.</p>

<p>The NinevehGL auto-correction can solve this problem for you, because it can normalize the vertex positions to fit on the screen (or in a specific range you want). By doing so you can control the size of your models without having to export them again from the 3D software. So in the case of the house and the spoon, you could set the auto-correction to fit the house in the range -10.0 to 10.0 and the spoon in -0.1 to 0.1, simple as that (one line of code).</p>

<p><br/><a name="outside"></a>  </p>

<h2><strong>Outside the Scope</strong></h2>  

<p>What should you not expect from NinevehGL? Here is a list:  </p>

<ul>  
    <li>NinevehGL is not a game engine, it's a 3D engine.</li>
    <li>So you shouldn't expect physics or sound controllers. You'll have to build that kind of thing yourself.</li>
    <li>No animation, YET! Animation is very important in the 3D world, but the first version of NinevehGL will not come with this feature. (Animation = Character Rigging + Bones)</li>
    <li>No collisions, YET! There are two techniques to deal with collisions, bounding box and bounding mesh. In this version, NinevehGL will not implement either of them.</li>
</ul>

<p>Obviously any item in this list can change, because NinevehGL is flexible. Consider the above an "out of scope" list only for the first version of NinevehGL.</p>

<p><br/><a name="conclusion"></a>  </p>

<h2><strong>Conclusion</strong></h2>  

<p>I'm sure you now have a solid opinion about NinevehGL, good or not, so let me know what you think: post a comment below, send me an email, tweet me... anything.</p>

<p>Again, I don't want to make promises, but I really want to see this engine released soon. So I'm sure it will come in the first half of this year (2011). I'm just finishing some details now and organizing everything for the first release.</p>

<p>Thanks for reading,</p>

<p>And see you very soon!</p>

<p><img src='http://db-in.com/images/ninevehgl_featured.jpg'  alt="NinevehGL will come in the first half of 2011." title="ninevehgl_featured" width="600" height="475" class="size-full wp-image-1382" /></p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/ninevehgl-features/</link><guid isPermaLink="false">176d1dc9-3c22-4f25-8d65-20f12bb31294</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Wed, 05 Feb 2014 09:37:13 GMT</pubDate></item><item><title><![CDATA[NinevehGL is HERE!]]></title><description><![CDATA[<p><img src='http://db-in.com/images/ngl_divulgation.jpg'  alt="" title="NinevehGL is here" width="200" height="200" class="alignright size-full" />Hello my friends,</p>

<p>After long months of waiting for a single day... it was a long wait, I know, but today things will change a little bit. Today is THAT day.</p>

<p>After working through the nights, polishing every piece of code, thinking and rethinking routines... finally IT is here!</p>

I'm very very happy to announce that NinevehGL is HERE!  
<!--more-->  

<p><br/>  </p>

<h2><strong>Ladies and Gentlemen</strong></h2>  

<p>With great pleasure, let me introduce you to <a href='http://nineveh.gl/' >NinevehGL</a>! <br />
<br/> <br />
<img src='http://db-in.com/images/ngl_divulgation.jpg'  alt="" title="ngl_divulgation" width="670" height="501" class="aligncenter size-full wp-image-1438" /></p>

<p><br/>  </p>

<h2><strong>Keep it Simple</strong></h2>  

<p>You know me, I'm so bored with the many web sites, applications, 3D engines and technologies that make our lives hell with their docs (or the absence of docs), poor tutorials, complex setups, complex APIs or, even worse... paid licenses! God dammit!!!  </p>

<ul>  
    <li>So, let's try something different, something simple! Starting with the web site: <a href='http://nineveh.gl/' >nineveh.gl</a>, just it, simple and easy.</li>
    <li>What about tutorials or docs? Very simple: <a href='http://nineveh.gl/docs/tutorials/' >http://nineveh.gl/docs/tutorials/</a>, video tutorials!</li>
    <li>Takes a long time to learn? Maybe… but what do you think about just 30 min? 10 videos, 3 min each. Sounds good?</li>
</ul>

<p><br/>  </p>

<h2><strong>Awesome features</strong></h2>  

<p>There are many cool features in NinevehGL. But I think 3 are the "Killer Features":  </p>

<ul>  
    <li>OpenGL ES 2.0 (Programmable Pipeline): It uses the newest OpenGL ES version. More power, faster, lighter, better and shaders! With NinevehGL you can use all the power of the shaders and programmable pipeline.</li>
    <li>Import directly from 3D software! NinevehGL doesn't need plugins or special 3D formats to import your files! Use Wavefront OBJ or COLLADA files (every 3D application exports one of them). NinevehGL is ready to import them.</li>
    <li>Made with pure Objective-C (Obj-C)! Yes, as an iOS developer, when I use OpenGL I expect to see Obj-C code, not C++ or C. NinevehGL is purely made with the native iOS language. Classes and routines follow all the Apple/Cocoa Touch guidelines.</li>
</ul>

<p>Well, another great thing about NinevehGL is that it's FREE! A 3D engine for iOS totally FREE!</p>

<p>This is a short post just to tell you about this great news. I hope you like it.</p>

<p>See you, guys!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/ninevehgl-is-here/</link><guid isPermaLink="false">b90d8361-06df-4e56-b8f9-243825ca8398</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Wed, 05 Feb 2014 09:36:11 GMT</pubDate></item><item><title><![CDATA[Calculating Normals and Tangent Space]]></title><description><![CDATA[<p><img src='http://db-in.com/images/vertex_normal_featured.jpg'  alt="" title="Binary world" width="200" height="200" class="alignleft size-full" />Hi guys!</p>

<p>In this article I'll show how to calculate per-vertex Normals and the Tangent Space. Here you'll see the most accurate technique, generating real Normals and splitting vertices when necessary. This article is an intermediate part of the "All about Shaders" series, so it would be nice if you've read the first part: <a href='http://blog.db-in.com/all-about-shaders-part-13/'  title="All about Shaders – (part 1/3)" target="_blank">All about Shaders – (part 1/3)</a></p>

<!--more-->

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>First off, the calculations and routines we'll create here are not an easy task; there are complex concepts and calculations involved. So be sure you have this macro view:  </p>

<ul>  
    <li>Usually 3D software exports an optimized per-vertex Normal; in those cases we can save time by avoiding re-creating the Normals. So we'll create the Normal vectors ONLY when the Normals from the 3D file are not optimized or don't exist.</li>
    <li>Non-optimized Normals means the same Normal vector was written many times by the 3D software. In some 3D file formats, like COLLADA, the same Normal vector can appear hundreds of times, making the parsing very expensive.</li>
    <li>It's a good idea to calculate the Tangent Space whenever possible (all we need is the Vertex Position and the Vertex Texcoord).</li>
    <li>The best way to deal with meshes in OpenGL is to use what we call an "Array of Structures"; however, I'll show you a generic algorithm that can be used with a "Structure of Arrays" as well.</li>
</ul>

<p>If these four bullets sound like "Greek*" to you, I highly recommend reading some other articles before proceeding:  </p>

<ul>  
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  title="All about OpenGL ES 2.x – (part 1/3)" target="_blank">All about OpenGL ES 2.x – (part 1/3)</a></li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2'  title="All about OpenGL ES 2.x – (part 2/3)" target="_blank">All about OpenGL ES 2.x – (part 2/3)</a></li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-3'  title="All about OpenGL ES 2.x – (part 3/3)" target="_blank">All about OpenGL ES 2.x – (part 3/3)</a></li>
</ul>  

<p>(* well, if you are Greek, sorry for that; please take this word as equivalent to "Chinese" or something like that).</p>

<p>As I explained before, to calculate the Normals we just need the vertex positions, but to calculate the Tangent Space we also need the texcoords. If the mesh we are working on doesn't have texcoords we'll skip the Tangent Space phase, because it's not possible to create an arbitrary UV Map in code: UV Maps are design dependent and change the way the texture is made.</p>
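<p>By the way, the "Array of Structures" and "Structure of Arrays" mentioned in the bullets above can be illustrated like this (a sketch with one position and one texcoord per vertex; the names are illustrative):</p>

```c
/* Array of Structures (AoS): all attributes of one vertex are contiguous.
 * OpenGL reads this as one interleaved buffer with per-attribute offsets. */
typedef struct
{
    float position[3];
    float texcoord[2];
} Vertex;

typedef struct
{
    Vertex vertices[3];
} TriangleAoS;

/* Structure of Arrays (SoA): each attribute lives in its own array,
 * so positions and texcoords are uploaded as separate buffers. */
typedef struct
{
    float positions[3][3];
    float texcoords[3][2];
} TriangleSoA;
```

<p>Both layouts hold exactly the same data; AoS tends to be the more cache-friendly choice for OpenGL vertex buffers, which is why the algorithm below targets it while remaining adaptable to SoA.</p>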

<p>Hands at work!</p>

<p><br/>  </p>

<h2><strong>Calculating Normals - Step 1</strong></h2>  

<p>In theory, the technique to calculate the face normals is simple: we find the perpendicular vector of each face (triangle). However, as you saw in the first tutorial of "<a href='http://blog.db-in.com/all-about-shaders-part-1/#shading_types' target="_blank">All About Shaders</a>", face normals alone are not good enough. So we need to calculate the vertex normals.</p>

<p>Things become a little more complex when we try to calculate the vertex normals. Every Face Normal affects the vertices that compose the face. A single vertex can be shared by multiple faces, so the final Vertex Normal is the averaged vector of each Face that shares this Vertex. As each face has its own size, the averaged Vertex Normal vector should account for those differences.</p>

<p><img src='http://db-in.com/images/adjacent_normal_example.jpg'  alt="Vertices shared by multiple faces will have the resulting Normal as an average of all adjacent faces&#039; Normals." title="adjacent_normal_example" width="600" height="500" class="size-full wp-image-1461" /></p>

<p><img src='http://db-in.com/images/teapot_strange1.jpg'  alt="" title="teapot_strange" width="300" height="203" class="alignleft size-full wp-image-1462" />Just with this concept you can create the Normals, but they will look strange on some meshes. Like this teapot on the left. When I created my first Normals, I spent weeks trying to find what was wrong with my calculations or with this concept...  "Everything is OK, but only that damn vertex is not. Why?", I thought. Many others said it was a problem with my code, a problem with my memory allocation, and all other kinds of nonsense. But only after looking closely at a great 3D software did I find the problem. I'll show you the same image I spent hours staring at until I found the solution.</p>

<p><img src='http://db-in.com/images/vertex_normal_example1.jpg'  alt="Vertex Normals" title="vertex_normal_example" width="600" height="463" class="size-full wp-image-1463" /></p>

<p>Did you notice something strange? This is a basic teapot mesh; many 3D applications give it to you for testing materials and lights. But this mesh has ONE strange vertex: a vertex with 2 Normal vectors. Is that possible? Actually, NO. The mesh structure must follow a pattern, so you can't have all vertices with 1 Normal and only one vertex with 2 Normals. This is a very important point that no one talks about; actually, I've never seen anyone talk about it.</p>

<p>It's time to understand what happens. Your mesh structure is not complete until you calculate the Normals. Why? Because some vertices will be "broken/split" into two or more vertices by the Normals. This is what happens in that image. The 3D software will not show you this, but there are two vertices with the same position and texcoords, but with different Normals. Obviously 3D applications prefer to omit this for performance reasons, but we can't omit this fact from OpenGL. We must inform the Shaders that there are two vertices instead of a single one.</p>

<p>And how will we know where to break a vertex? By the angle between faces. WOW, this Normals thing is becoming very complex! Yeah, I told you this is a very important and complex part. Let's think in steps. Usually the light looks continuous on a surface up to an angle of ~80º between faces (like on a sphere); however, two faces with an angle > ~80º form a hard edge, like a table's corners. OK, now translating to English, this is what we'll do:  </p>

<ol>  
    <li>Calculate each face's Normal.</li>
    <li>Calculate the angle between face's Normals.</li>
    <li>At each group of ~80º we'll create a new set of Normals.</li>
    <li>Finally we'll find the averaged Normal for each group, respecting the face's size.</li>
</ol>
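<p>In miniature, the split decision of steps 2 and 3 can be sketched like this (a sketch only; the real routine appears later in the crease-angle function):</p>

```c
/* Compare two unit face normals and decide, from the angle between them,
 * whether the shared vertex must be split into two normals (a hard edge). */
static int needsSplit(float ax, float ay, float az,
                      float bx, float by, float bz)
{
    /* The dot product of unit vectors is the cosine of the angle between them. */
    float cosAngle = ax * bx + ay * by + az * bz;

    /* cos(80 degrees) ~= 0.1736; a smaller cosine means a wider angle,
     * so the two faces form a hard edge. */
    return cosAngle < 0.1736f;
}
```

<p>Coplanar faces (cosine 1.0) share a smooth normal, while perpendicular faces (cosine 0.0) get split.</p>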

<p>Well, now it looks simpler, with just 4 steps. Nice! OK, let's put our hands on the code. First off, let's create our Vector/Math functions.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Vector Math</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// Early in definitions...
#include <math.h> // Required by sqrtf() below.
typedef struct  
{
    float x;
    float y;
} vec2;

// Vector subtraction (the distance vector between two points).
static inline vec2 vec2Subtract(vec2 vecA, vec2 vecB)  
{
    return (vec2){vecA.x - vecB.x, vecA.y - vecB.y};
}

typedef struct  
{
    float x;
    float y;
    float z;
} vec3;

static const vec3 kvec3Zero = {0.0f, 0.0f, 0.0f};

// Vector's length.
static inline float vec3Length(vec3 vec)  
{
    // Square root.
    return sqrtf(vec.x * vec.x + vec.y * vec.y + vec.z * vec.z);
}

// Vector's normalization.
static inline vec3 vec3Normalize(vec3 vec)  
{
    // Find the magnitude/length. The variable is called inverse magnitude (iMag)
    // because instead of dividing each element by the magnitude, we multiply
    // by its reciprocal, which is faster.
    float iMag = vec3Length(vec);

    // Avoid divisions by 0.
    if (iMag != 0.0f)
    {
        iMag = 1.0f / iMag;

        vec.x *= iMag;
        vec.y *= iMag;
        vec.z *= iMag;
    }

    return vec;
}

// Vector's sum.
static inline vec3 vec3Add(vec3 vecA, vec3 vecB)  
{
    return (vec3){vecA.x + vecB.x, vecA.y + vecB.y, vecA.z + vecB.z};
}

// Vector subtraction (the distance vector between two points).
static inline vec3 vec3Subtract(vec3 vecA, vec3 vecB)  
{
    return (vec3){vecA.x - vecB.x, vecA.y - vecB.y, vecA.z - vecB.z};
}

// Checks for zero values.
static inline int vec3IsZero(vec3 vec)  
{
    return (vec.x == 0.0f && vec.y == 0.0f && vec.z == 0.0f);
}

// The dot product of two normalized vectors returns the cosine of the angle between them.
static float vec3Dot(vec3 vecA, vec3 vecB)  
{
    return vecA.x * vecB.x + vecA.y * vecB.y + vecA.z * vecB.z;
}

// The cross product returns a vector orthogonal to the other two,
// that is, the new vector is mutually perpendicular to both input vectors.
static vec3 vec3Cross(vec3 vecA, vec3 vecB)  
{
    vec3 vec;

    vec.x = vecA.y * vecB.z - vecA.z * vecB.y;
    vec.y = vecA.z * vecB.x - vecA.x * vecB.z;
    vec.z = vecA.x * vecB.y - vecA.y * vecB.x;

    return vec;
}

// Checks if there is a NaN value inside the informed vector.
// If a NaN value is found, it's changed to 0.0f (zero).
static vec3 vec3Cleared(vec3 vec)  
{
    vec3 cleared;
    cleared.x = (vec.x != vec.x) ? 0.0f : vec.x;
    cleared.y = (vec.y != vec.y) ? 0.0f : vec.y;
    cleared.z = (vec.z != vec.z) ? 0.0f : vec.z;

    return cleared;
}
</pre>

<p>Now we're ready to go further:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Calculating Face Normals</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// Private variables...
// Assumes that all the variables that start with "_" are private ones
// that you must implement yourself. Some of these values must
// be present before you start:

// vec3 *_vertices    "array of vertices positions"   (required)
// vec3 *_texcoords   "array of texture coordinates"  (optional)
// vec3 *_normals     "array of normals"              (optional/to calculate)
// vec3 *_tangents    "array of tangents"             (to calculate)
// vec3 *_bitangents  "array of bitangents"           (to calculate)

// int  _vCount       "vertices count"                (required)
// int  _tCount       "texture coordinates count"     (optional)
// int  _nCount       "normals count"                 (optional/to calculate)
// int  _taCount      "tangents count"                (to calculate)
// int  _biCount      "bitangents count"              (to calculate)

// int *_faces        "array of face indices"         (required)
// int  _facesCount   "faces indices count"           (required)
// int  _facesStride  "stride of faces indices"       (required)

// Checks the crease angle for the normal calculations.
// This function creates and divides the normals for a vertex, recursively.
static unsigned int creaseAngle(unsigned int index, vec3 vector, vec3 **buffer, unsigned int *count, NSMutableDictionary *list)  
{
    // Let's talk about this function later on.
}

// Calculating the Tangent Space.
void calculateTangentSpace()  
{
    unsigned int i, length;
    unsigned int j, lengthJ;

    unsigned int *newFaces, *outFaces;
    unsigned int oldFaceStride = _facesStride;

    int i1, i2, i3;
    int vi1, vi2, vi3;
    int ti1, ti2, ti3;

    vec3 vA, vB, vC;
    vec2 tA, tB, tC;
    vec3 distBA, distCA;
    vec2 tdistBA, tdistCA;

    vec3 normal;
    vec3 tangent;
    vec3 bitangent;

    vec3 *normalBuffer;
    vec3 *tangentBuffer;
    vec3 *bitangentBuffer;

    NSMutableDictionary *multiples;

    Element *element;
    int vLength, vOffset;
    int nLength, nOffset;
    int tLength, tOffset;

    float area, delta;
    float *outValue;

    // Checks if the parsed mesh has Normals and Texture Coordinates.
    BOOL hasNormals = NO;// CUSTOM: Check if your mesh structure already has normals.
    BOOL hasTextures = YES;// CUSTOM: Check if your mesh structure already has texcoords.

    // Gets the vertex element.
    vLength = 4;// CUSTOM: Take the length of your vertex position component.
    vOffset = 0;// CUSTOM: Take the offset  of your vertex position component in the Array of Structures.

    // If the normal element doesn't exist yet, creates a new one.
    if (!hasNormals)
    {
        // CUSTOM: Create the Normal elements if it doesn't exist yet.
        // The element should be just as the vertex one, having a length and an offset.
        // IMPORTANT: Increase the stride of your Array of Structures (_facesStride).
    }

    // Gets the normal element.
    nLength = 3;// CUSTOM: Take the normal length.
    nOffset = 7;// CUSTOM: Take the normal offset. In this case (7) we consider it's after the texcoord.

    // If the texture coordinate element exist, gets it and create tangent and bitangent element.
    if (hasTextures)
    {
        tLength = 3;// CUSTOM: Take the texcoord length.
        tOffset = 4;// CUSTOM: Take the texcoord offset. In this case it's after the vertices positions.

        // CUSTOM: Here you should create the Tangent and Bitangent elements. Just as any
        // other element until here, they must have length and offset.
        // IMPORTANT: Increase the stride of your Array of Structures (_facesStride).
    }

    // Allocates memory to the new faces.
    newFaces = malloc(sizeof(int) * (_facesCount * _facesStride));

    // A priori, assumes that for each vertex exists only one normal.
    _nCount = _taCount = _biCount = _vCount;

    // Initializes the buffers for the tangent space elements.
    // This memory allocation must use calloc because they must be 0 (zero) value,
    // otherwise NaN values can be generated.
    normalBuffer = calloc(_nCount, sizeof(vec3));
    tangentBuffer = calloc(_taCount, sizeof(vec3));
    bitangentBuffer = calloc(_biCount, sizeof(vec3));

    // Initializes the dictionaries to deal with vertices with multiple normals in it.
    multiples = [[NSMutableDictionary alloc] init];

    // Loop through each triangle.
    length = _facesCount;
    for (i = 0; i < length; i += 3)
    {
        // Triangle Vertices. At this moment _faces is an ordered list of elements' indices:
        // iv1, it1, in1, iv2, it2, in2, iv3, it3, in3...
        //  |              |              |
        //  V              V              V
        // iv1,           iv2,           iv3
        // So the following lines will extract the indices of vertices that form a triangle.
        i1 = _faces[i * oldFaceStride + vOffset];
        i2 = _faces[(i + 1) * oldFaceStride + vOffset];
        i3 = _faces[(i + 2) * oldFaceStride + vOffset];

        // Calculates the vertex indices in the array of vertices.
        vi1 = i1 * vLength;
        vi2 = i2 * vLength;
        vi3 = i3 * vLength;

        // Retrieves 3 vertices from the array of vertices.
        vA = (vec3){_vertices[vi1], _vertices[vi1 + 1], _vertices[vi1 + 2]};
        vB = (vec3){_vertices[vi2], _vertices[vi2 + 1], _vertices[vi2 + 2]};
        vC = (vec3){_vertices[vi3], _vertices[vi3 + 1], _vertices[vi3 + 2]};

        // Calculates the vector of the edges, the distance between the vertices.
        distBA = vec3Subtract(vB, vA);
        distCA = vec3Subtract(vC, vA);

        //*************************
        //  Normals
        //*************************
        if (!hasNormals)
        {
            // Calculates the face normal to the current triangle.
            normal = vec3Cross(distBA, distCA);

            // Searches for crease angles considering the adjacent triangles.
            // This function also initialize new blocks of memory to the buffer, setting them to 0 (zero).
            i1 = creaseAngle(i1, normal, &normalBuffer, &_nCount, multiples);
            i2 = creaseAngle(i2, normal, &normalBuffer, &_nCount, multiples);
            i3 = creaseAngle(i3, normal, &normalBuffer, &_nCount, multiples);

            // Averages the new normal vector with the previously buffered one.
            normalBuffer[i1] = vec3Add(normal, normalBuffer[i1]);
            normalBuffer[i2] = vec3Add(normal, normalBuffer[i2]);
            normalBuffer[i3] = vec3Add(normal, normalBuffer[i3]);
        }
        else
        {
            // If the parsed file has normals in it, retrieves their indices in the array of normals.
            vi1 = _faces[i * oldFaceStride + nOffset] * nLength;
            vi2 = _faces[(i + 1) * oldFaceStride + nOffset] * nLength;
            vi3 = _faces[(i + 2) * oldFaceStride + nOffset] * nLength;

            // Retrieves the normals.
            vA = (vec3){_normals[vi1], _normals[vi1 + 1], _normals[vi1 + 2]};
            vB = (vec3){_normals[vi2], _normals[vi2 + 1], _normals[vi2 + 2]};
            vC = (vec3){_normals[vi3], _normals[vi3 + 1], _normals[vi3 + 2]};

            // Approximates the face normal of the current triangle by summing its vertex normals.
            normal = vec3Add(vec3Add(vA, vB), vC);
        }

// CONTINUE...
</pre>

<p>Let's understand all those lines:</p>

<ul>  
    <li><strong>Lines 1-16</strong>: Definition of the output variables.</li>
    <li><strong>Lines 20-23</strong>: A very important function to calculate the crease angle between faces. We'll discuss it in more detail later on.</li>
    <li><strong>Lines 28-59</strong>: Definition of the variables we'll use to calculate the Normals and the Tangent Space.</li>
    <li><strong>Lines 62-63</strong>: CUSTOM CODE. Checks if there are Normals and Texture Coordinates already calculated in your mesh structure.</li>
    <li><strong>Lines 66-67</strong>: CUSTOM CODE. Gets the length of each Vertex set (usually 3 or 4) and its offset in the Array of Faces.</li>
    <li><strong>Lines 70-75</strong>: CUSTOM CODE. If there are no previously calculated Normals, creates a new element and defines its length (usually 3) and its offset in the Array of Faces.</li>
    <li><strong>Lines 78-79</strong>: CUSTOM CODE. Gets the length of each Normal set (usually 3) and its offset in the Array of Faces.</li>
    <li><strong>Lines 82-90</strong>: CUSTOM CODE. If there are calculated Texture Coordinates, creates the Tangent and Bitangent elements. As both Tangent and Bitangent follow the same element rule, you only need to define the length and face offset for one of them.</li>
    <li><strong>Lines 93-106</strong>: Allocates the necessary memory for the calculations.</li>
    <li><strong>Lines 109-110</strong>: Starts the loop through all triangles of your mesh.</li>
    <li><strong>Lines 118-120</strong>: Gets the indices of the current working triangle.</li>
    <li><strong>Lines 123-130</strong>: Gets the vertices related to the current triangle.</li>
    <li><strong>Lines 133-134</strong>: Calculates the distance between the triangle's vertices. Depending on the order in which you calculate this distance, the direction of the resulting normal vector can flip, so be careful about changing this order.</li>
    <li><strong>Lines 142-153</strong>: If there are no previous normals, the cross product of the two calculated edge vectors generates a perpendicular vector: the face normal of the current triangle (<a href='http://www.euclideanspace.com/maths/algebra/vectors/vecAlgebra/cross/index.htm'  target="_blank">Cross Product info</a>). Then the crease angles are calculated and the current face normal is added to the normalBuffer; by summing two vectors we automatically get their average direction (<a href='http://www.euclideanspace.com/maths/algebra/vectors/vecAlgebra/index.htm'  target="_blank">Add Vector info</a>).
IMPORTANT: We don't normalize the vector at this step; by keeping its length we make sure each triangle's area (size) is taken into account, because the triangles have different sizes.</li>  
    <li><strong>Lines 158-168</strong>: If there are previously calculated normals, we still compute the current face normal, because the stored normals are always vertex normals, not face normals, and the next steps will need the face normal.</li>
</ul>

<p><br/>  </p>

<h2><strong>The Crease Angle - Step 2</strong></h2>  

<p>As I said before, the crease angle is a crucial part of creating perfect normals; otherwise your normals will look correct on some meshes but very strange on others. My crease angle function is recursive, that means it will automatically split the vertex index (and grow the count) as many times as necessary. So if one single vertex needs 3 different normals, this function will split it 3 times.</p>

<p>This function must be called once per vertex. In the above code it's called at: <br />
<code>i1 = creaseAngle(i1, normal, &amp;normalBuffer, &amp;_nCount, multiples); <br />
i2 = creaseAngle(i2, normal, &amp;normalBuffer, &amp;_nCount, multiples); <br />
i3 = creaseAngle(i3, normal, &amp;normalBuffer, &amp;_nCount, multiples);</code></p>

<p>So the original vertex index can become a new index, and this new one will be used for the new normal vector and for the tangent space (tangent and bitangent).</p>

<table width="675">  
<tbody>  
<tr>  
<th>Calculating Crease Angle</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">  
// Defines the cosine of the maximum crease angle (0.2, roughly 80°).
#define kCreaseAngle        0.2

// Checks the crease angle for the normal calculations.
// This function creates and divides the normals to a vertex.
static unsigned int creaseAngle(unsigned int index,  
                                vec3 vector,
                                vec3 **buffer,
                                unsigned int *count,
                                NSMutableDictionary *list)
{
    NSNumber *newIndex, *oldIndex;

    // Eliminates the NaN points.
    (*buffer)[index] = vec3Cleared((*buffer)[index]);

    // Checks if the informed normal vector is not zero.
    if (!vec3IsZero((*buffer)[index]))
    {
        // Calculates the cosine of the angle between the current normal vector and the
        // averaged normal in the buffer.
        float cos = vec3Dot(vec3Normalize(vector), vec3Normalize((*buffer)[index]));

        // If the cosine is greater than the crease threshold, the current normal vector
        // forms an acceptable angle with the averaged normal in the buffer. Otherwise,
        // proceed and create a new normal vector for the current triangle face.
        if (cos <= kCreaseAngle)
        {
            // Tries to retrieve an already buffered normal with the same bend.
            oldIndex = [NSNumber numberWithInt:index];
            newIndex = [list objectForKey:oldIndex];

            // If no buffer was found, create a new register to the current normal vector.
            if (newIndex == nil)
            {
                // Retrieves the new index and stores its value as a linked list to the old one.
                newIndex = [NSNumber numberWithInt:*count];
                [list setObject:newIndex forKey:oldIndex];
                index = [newIndex intValue];

                // Reallocates the buffer and set the new buffer value to zero, avoiding NaN.
                *buffer = realloc(*buffer, NGL_SIZE_VEC3 * ++*count);
                (*buffer)[index] = kvec3Zero;
            }
            // Otherwise, repeat the process with the buffered value to check for new crease angles.
            else
            {
                index = creaseAngle([newIndex intValue], vector, buffer, count, list);
            }
        }
    }

    return index;
}
</pre>

<p>Understanding line by line:</p>

<ul>  
    <li><strong>Line 12</strong>: Clears any possible NaN value. If a NaN value is found, it's set to 0.0f (zero).</li>
    <li><strong>Line 15</strong>: Makes sure an empty buffer index is skipped, because there is no reason to calculate the crease angle between a face and nothing.</li>
    <li><strong>Line 19</strong>: By calculating the dot product (<a href='http://www.euclideanspace.com/maths/algebra/vectors/vecAlgebra/dot/index.htm'  target="_blank">Dot Product info</a>) we get the cosine of the angle formed between two faces: the new normal vector and the value already in the normal buffer. Both must be normalized at this point, otherwise their lengths would affect the resulting dot product.</li>
    <li><strong>Line 24</strong>: Checks if the calculated cosine is below the crease threshold. A cosine lower than the threshold means the angle between the faces is larger than the maximum allowed, so the process of splitting the buffer index starts.</li>
    <li><strong>Lines 27-31</strong>: Creates the NSNumbers. The old value is actually the current index parameter. Line 28 checks whether a split index was already registered for the current index; if the current index was split before, the crease angle function is called again on it. Otherwise, the process of splitting the buffer index continues.</li>
    <li><strong>Lines 34-36</strong>: The new buffer index is created at the end of the current buffer array, stored, and linked to the current index. So if the current index appears again, we already know where its split index is.</li>
    <li><strong>Lines 39-40</strong>: Reallocates the memory of the buffer array and sets the new index to zero, avoiding NaN values.</li>
    <li><strong>Line 50</strong>: The returned index will always correspond to one of these two situations:
<ol>  
    <li>An empty buffer index (zero vector).</li>
    <li>A buffer index whose stored normal forms an acceptable angle with the new normal.</li>
</ol></li>  
</ul>

<p>Nice! Now we have correctly calculated Normals, respecting the crease angle. Besides, if previously calculated normals exist, this routine respects them and recomputes only the face normal.</p>

<p>Now it's time to enter the Tangent Space.</p>

<p><br/>  </p>

<h2><strong>Calculating the Tangent Space - Step 3</strong></h2>  

<p>As you know, the Tangent Space is formed by the Tangent Vector and the Bitangent Vector. Just to make this point clear: the word "Binormal" is wrong in this context. A 2D curve can have a binormal, but a surface in 3D space has only one Normal Vector, so the vector perpendicular to both the Normal and the Tangent is the Bitangent. Some people still call it Binormal; we can understand what they mean, but we know the term is a misnomer here, there is no Binormal on a 3D surface.</p>

<p>OK, now let's understand the concept of the Tangent Space.</p>

<p>The Tangent Space is a local frame describing how light interacts with each face of the surface. But the tangent space doesn't exist in reality, right? Right, it doesn't. The tangent space was created as a convention to support the "Bump Techniques". Nowadays there are many bump techniques, like bump mapping, parallax mapping, displacement mapping and many others. Independent of the technique, the tangent space vectors must be calculated.</p>

<p>The tangent and bitangent are orthogonal to the Normal and tell us the direction of the face's texture coordinates (the U and V map directions). This direction is used to calculate the lighting based on the RGB colors of the bump image file.</p>

<p>So, in short, the Tangent Space is just a convention created to support the bump techniques. Here is how we'll calculate it.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Calculating Tangent Space</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

// CONTINUING...

        //*************************
        //  Tangent Space
        //*************************
        if (hasTextures)
        {
            // The crease angle process produces splits on the per-vertex normals; as the tangent
            // space must stay orthogonalized, the tangent and bitangent follow those splits.
            if (_nCount > _taCount)
            {
                // Normals, Tangents and Bitangents buffers will always have the same number of elements.
                tangentBuffer = realloc(tangentBuffer, sizeof(vec3) * _nCount);
                bitangentBuffer = realloc(bitangentBuffer, sizeof(vec3) * _nCount);

                // Sets only the newly allocated entries to 0 (zero).
                lengthJ = _nCount;
                for (j = _taCount; j < lengthJ; ++j)
                {
                    tangentBuffer[j] = kvec3Zero;
                    bitangentBuffer[j] = kvec3Zero;
                }

                _taCount = _biCount = _nCount;
            }

            // Retrieves texture coordinate indices.
            ti1 = _faces[i * oldFaceStride + tOffset] * tLength;
            ti2 = _faces[(i + 1) * oldFaceStride + tOffset] * tLength;
            ti3 = _faces[(i + 2) * oldFaceStride + tOffset] * tLength;

            // Retrieves the texture coordinates.
            tA = (vec2){_texcoords[ti1], _texcoords[ti1 + 1]};
            tB = (vec2){_texcoords[ti2], _texcoords[ti2 + 1]};
            tC = (vec2){_texcoords[ti3], _texcoords[ti3 + 1]};

            // Calculates the vector of the texture coordinates edges, the distance between them.
            tdistBA = vec2Subtract(tB, tA);
            tdistCA = vec2Subtract(tC, tA);

            // Calculates the triangle's area.
            area = tdistBA.x * tdistCA.y - tdistBA.y * tdistCA.x;

            //*************************
            //  Tangent
            //*************************
            if (area == 0.0f)
            {
                tangent = kvec3Zero;
            }
            else
            {
                delta = 1.0f / area;

                // Calculates the face tangent to the current triangle.
                tangent.x = delta * ((distBA.x * tdistCA.y) + (distCA.x * -tdistBA.y));
                tangent.y = delta * ((distBA.y * tdistCA.y) + (distCA.y * -tdistBA.y));
                tangent.z = delta * ((distBA.z * tdistCA.y) + (distCA.z * -tdistBA.y));
            }

            // Averages the new tangent vector with the previously buffered one.
            tangentBuffer[i1] = vec3Add(tangent, tangentBuffer[i1]);
            tangentBuffer[i2] = vec3Add(tangent, tangentBuffer[i2]);
            tangentBuffer[i3] = vec3Add(tangent, tangentBuffer[i3]);

            //*************************
            //  Bitangent
            //*************************
            // Calculates the face bitangent to the current triangle,
            // completing the orthogonalized tangent space.
            bitangent = vec3Cross(normal, tangent);

            // Averages the new bitangent vector with the previously buffered one.
            bitangentBuffer[i1] = vec3Add(bitangent, bitangentBuffer[i1]);
            bitangentBuffer[i2] = vec3Add(bitangent, bitangentBuffer[i2]);
            bitangentBuffer[i3] = vec3Add(bitangent, bitangentBuffer[i3]);
        }

// CONTINUE...

</pre>

<p>Understanding line by line:</p>

<ul>  
    <li><strong>Line 6</strong>: Only creates the tangent space if there are texture coordinates in the mesh.</li>
    <li><strong>Lines 10 - 25</strong>: Checks if the "Crease Angle" routine changed the normals. If so, adjusts the tangent and bitangent buffers to have the same size as the normals buffer.</li>
    <li><strong>Lines 27 - 42</strong>: Calculates the direction of the UV coordinates and the area of the current face (triangle).</li>
    <li><strong>Lines 47 - 64</strong>: Calculates the tangent vector for the current triangle and adds this value to the tangents buffer.</li>
    <li><strong>Lines 71 - 76</strong>: As the Tangent Space is formed by three orthogonal vectors, we can calculate the last one (the bitangent) very easily, just by taking the cross product of the Normal and the Tangent.</li>
</ul>

<p>OK, that's all, it's done! <br />
Now we have all that we need: Normals, Tangents and Bitangents. They are all correct now and we just need to bring their values back into the array format. You can do this last step however you wish; I'll show you the way I'm used to.</p>

<p><br/>  </p>

<h2><strong>Updating the Original Arrays - Step 4</strong></h2>  

<p>OK, as I told you at the start, we'll use the "Array of Structures" as our final format. So we need to create/update that array and its indices based on the new elements, which may include Normals, Tangents and Bitangents.</p>

<p>I'll rewrite the <code>"calculateTangentSpace()"</code> function here so you can take a look at it without the breaks/continues.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Putting It All Together</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

// CONTINUING...        

        // Copies the oldest face indices and inserts the new tangent space indices.
        outFaces = newFaces + (i * _faceStride);
        lengthJ = _faceStride;
        for (j = 0; j < lengthJ; ++j)
        {
            *outFaces++ = (j < oldFaceStride) ? _faces[i * oldFaceStride + j] : i1;
        }

        outFaces = newFaces + ((i + 1) * _faceStride);
        for (j = 0; j < lengthJ; ++j)
        {
            *outFaces++ = (j < oldFaceStride) ? _faces[(i + 1) * oldFaceStride + j] : i2;
        }

        outFaces = newFaces + ((i + 2) * _faceStride);
        for (j = 0; j < lengthJ; ++j)
        {
            *outFaces++ = (j < oldFaceStride) ? _faces[(i + 2) * oldFaceStride + j] : i3;
        }
    }

    // Commits the changes to the original array of faces. At this point it could look like:
    // iv1, it1, in1, ita1, ibt1, iv2, it2, in2, ita2, ibt2,...
    _faces = realloc(_faces, sizeof(int) * (_faceNumber * _faceStride));
    memcpy(_faces, newFaces, sizeof(int) * (_faceNumber * _faceStride));

    // Reallocates the memory for the array of normals, if needed. 
    if (!hasNormals)
    {
        _normals = realloc(_normals, sizeof(vec3) * _nCount);
    }

    // Reallocates the memory for the array of tangents and array of bitangents, if needed.
    if (hasTextures)
    {
        _tangents = realloc(_tangents, sizeof(vec3) * _taCount);
        _bitangents = realloc(_bitangents, sizeof(vec3) * _biCount);
    }

    // Loops through all new values of the tangent space, normalizing all the averaged vectors.
    length = _nCount;
    for (i = 0; i < length; i++)
    {
        // Puts the new normals, if needed.
        if (!hasNormals)
        {
            normal = vec3Normalize(normalBuffer[i]);
            outValue = _normals + (i * 3);
            *outValue++ = normal.x;
            *outValue++ = normal.y;
            *outValue = normal.z;
        }

        // Puts the new tangent and bitangent, if needed.
        // The Gram–Schmidt orthogonalization process isn't necessary here, because all the
        // vectors of the tangent space are already orthogonalized thanks to the crease angle approach.
        if (hasTextures)
        {
            tangent = vec3Normalize(tangentBuffer[i]);
            bitangent = vec3Normalize(bitangentBuffer[i]);

            outValue = _tangents + (i * 3);
            *outValue++ = tangent.x;
            *outValue++ = tangent.y;
            *outValue = tangent.z;

            outValue = _bitangents + (i * 3);
            *outValue++ = bitangent.x;
            *outValue++ = bitangent.y;
            *outValue = bitangent.z;
        }
    }

    // Frees the allocated memory.
    free(newFaces);
    free(normalBuffer);
    free(tangentBuffer);
    free(bitangentBuffer);

    [multiples release];
}
</pre>

<p>Let's understand more about those last lines:  </p>

<ul>  
    <li><strong>Lines 4 - 21</strong>: Uses a pointer to insert the new face indices into the "newFaces" array; pointer arithmetic is faster than recomputing the array offsets on every access. These lines also preserve the old face indices, placing the new ones in the last positions.</li>
    <li><strong>Lines 26 - 27</strong>: Updates the original "array of faces". As you can see in the comments, at this point the new "array of faces" reads as "index of vertex 1, index of texcoord 1, index of normal 1, index of tangent 1, index of bitangent 1, index of vertex 2, ...". Remember that this article doesn't include the job of converting those arrays into one single "array of structures"; however, with this "array of faces" you can do it very easily.</li>
    <li><strong>Lines 30 - 40</strong>: Prepares the Normals, Tangents and Bitangents to receive the calculated values.</li>
    <li><strong>Lines 43 - 74</strong>: Updates the Normals, Tangents and Bitangents values using pointers.</li>
    <li><strong>Lines 77 - 82</strong>: Frees the allocated memory.</li>
</ul>
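<p>Since converting the final "array of faces" into one "array of structures" was left out of the article, here is a hedged sketch of how it could look. The 5-index stride (vertex, texcoord, normal, tangent, bitangent) matches the face layout commented in the listing; the function name and the 14-floats-per-vertex packing are my own illustrative assumptions:</p>

```c
#include <assert.h>
#include <string.h>

// Illustrative "array of structures" interleaving. Each face entry holds 5
// indices (vertex, texcoord, normal, tangent, bitangent), matching the final
// "array of faces" layout. Every output vertex packs 14 floats:
// position(3) + texcoord(2) + normal(3) + tangent(3) + bitangent(3).
enum { kFaceStride = 5, kFloatsPerVertex = 14 };

static void interleaveStructures(const unsigned int *faces, unsigned int count,
                                 const float *vertices, const float *texcoords,
                                 const float *normals, const float *tangents,
                                 const float *bitangents, float *out)
{
    unsigned int i;

    for (i = 0; i < count; ++i)
    {
        const unsigned int *f = faces + i * kFaceStride;
        float *o = out + i * kFloatsPerVertex;

        memcpy(o,      vertices   + f[0] * 3, 3 * sizeof(float)); // position
        memcpy(o + 3,  texcoords  + f[1] * 2, 2 * sizeof(float)); // UV
        memcpy(o + 5,  normals    + f[2] * 3, 3 * sizeof(float)); // normal
        memcpy(o + 8,  tangents   + f[3] * 3, 3 * sizeof(float)); // tangent
        memcpy(o + 11, bitangents + f[4] * 3, 3 * sizeof(float)); // bitangent
    }
}
```

<p>The resulting interleaved buffer can be uploaded as a single VBO, with each attribute bound via its byte offset inside the 14-float stride.</p>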

<p><br/><a name="conclusion"></a>  </p>

<h2><strong>Conclusion</strong></h2>  

<p>WOW, a lot of code in this article, don't you think? Let me simplify: as you may have noticed, there were only 4 steps until here. So, here is all that you need:  </p>

<ol>  
    <li>Calculate the Normals, or use the Face Normals you already have.</li>
    <li>When you calculate the Normals yourself, you must compute the crease angle and split the Normals accordingly.</li>
    <li>Calculate the Tangent Space, which includes Tangent and Bitangent.</li>
    <li>Update the original arrays, including the Normals, Tangents and Bitangents.</li>
</ol>

<p>OK, my friends, now you have all that you need to construct your preferred array format for OpenGL ("array of structures" or "structure of arrays").</p>

<p>My next article will continue the series about shaders; let's talk about some advanced techniques for OpenGL ES Shaders.</p>

<p>If you have any doubts, just Tweet me:  </p>

<script src='http://platform.twitter.com/widgets.js'  type="text/javascript"></script>  

<p><a href="http://twitter.com/share" class="twitter-share-button" data-related="dineybomfim" data-text="@dineybomfim" data-count="none" data-url="">Tweet</a> </p>

<p>See you soon!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/calculating-normals-and-tangent-space/</link><guid isPermaLink="false">31d62b67-f55b-449d-a250-5ba5c3811598</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:48:50 GMT</pubDate></item><item><title><![CDATA[Cameras on OpenGL ES 2.x - The ModelViewProjection Matrix]]></title><description><![CDATA[<p><img src='http://db-in.com/images/mvp_article_image.jpg'  alt="" title="mvp_article_image" width="300" height="239" class="alignleft size-medium wp-image-1349" />Hello my friends!</p>

<p>In this article I'll talk about a very important part of the 3D world. As you already know, the world behind our devices' screens is just an attempt to recreate the beauty and complexity of what the human eye sees. To do that we use cameras, which are the virtual simulation of the human eye, and to construct cameras we use mathematical equations.</p>

In this article I'll cover those cameras and the equations behind them, the difference between convex and concave lenses, what projections, matrices and quaternions are, and finally the famous Model View Projection Matrix. If you have any doubt, you know, just ask; I'll be glad to help.  
<!--more-->  

<p>Here is a little list of contents to help you to find something you want in this tutorial.</p>

<p><a name="list_contents"></a>  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#real_cameras">Cameras in the real world</a></li>
    <li><a href="#3d_history">3D history</a></li>
    <li><a href="#projections">Projections</a></li>
    <li><a href="#3d_cameras">Cameras in the 3D world</a></li>
    <li><a href="#3d_code">The code behind the 3D world</a>
        <ul>
            <li><a href="#matrices">Matrices</a></li>
            <li><a href="#matrices_deep">Matrices in Deep</a></li>
            <li><a href="#quaternions">Quaternions</a></li>
        </ul></li>
    <li><a href="#camera_code">The code behind the 3D cameras</a></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>First, let's see the basics about cameras: how they work in the real world, the lens differences, how zoom works, translations, rotations and similar concepts. Right after consolidating these concepts, we'll dive deep into OpenGL and understand how all of that fits into our application. Then we finally get to the code; I'll give you the equations and explain how they work.</p>

<p>Do you remember from my OpenGL series of tutorials when I said that Khronos delegated many responsibilities and focused on the most important part of the 3D world? (More precisely in part 1; if not, you can <a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  target="_blank">check it here</a>.) Well, from OpenGL ES 1.x to 2.x, the cameras were one of the responsibilities that Khronos delegated. So now we must create the cameras by ourselves. And you know what? I love it! With the shader behavior we have amazing control over our applications; more than that, we are free to construct awesome 3D engines.</p>

<p>With OpenGL controlling the cameras, we had only two or three kinds of camera. Now that we program the cameras ourselves, we can create any kind we want. In this article I'll talk about the basic ones: the Orthographic Camera and the Perspective Camera.</p>

<p>OK, let's start!</p>

<p><br/><a name="real_cameras"></a> <br />
<h2><strong>Cameras in the real world</strong></h2><a href="#list_contents">top</a> <br />
The human eye works as a convex lens: it converges the light to form an upside-down image on the retina. A camera's lens is usually composed of multiple convex and concave lenses, but the final image behaves more like that of a convex lens, just like the human eye.</p>

<p>The final image depends on many factors, not just the type of lens, but in general terms the image below shows how a picture looks behind each kind of lens. <br />
<img src='http://db-in.com/images/convex_concave_example.jpg'  alt="A picture behind each kind of lens." title="convex_concave_example" width="600" height="600" class="size-full wp-image-1351" /></p>

<p>Both kinds can produce an image close to the original one, I mean, with only a tiny distortion, depending on the distance of the object from the lens and on the angle of view. The next image shows the most important attributes of a camera. <br />
<img src='http://db-in.com/images/camera_elements_example.jpg'  alt="Camera attributes." title="camera_elements_example" width="600" height="392" class="size-full wp-image-1352" /></p>

<p>The red areas in the image above are not visible to the camera, so any fragment inside them will be clipped. The "Depth of Field" is the visible range; all fragments inside it will be visible. The term "Depth of Field" is also commonly used to describe a special effect, the Lens Blur. As the human eye has a focus that makes objects outside it look blurred, the Lens Blur effect simulates that focus, making objects outside the focus look blurred. So why didn't I put a "Focus" attribute in the image above? Because focus is a special feature of only some cameras; the basic 3D cameras don't implement focus behavior. The other important attribute is the "Angle of View", which represents the horizontal angle visible to the camera. Any fragment outside this angle will not be visible to the camera. Sometimes the "Angle of View" is also used to represent the vertical area, but usually we prefer to define the aspect ratio of the final image using the width and height.</p>

<p>Modern cameras are very accurate and can produce awesome effects by using those attributes and combining lens types. Now let's go back to our virtual world and see how we can transpose those attributes and behaviors mathematically. But before moving to the 3D cameras, we need to understand a little bit more about the math of the 3D world.</p>

<p><br/><a name="3d_history"></a> <br />
<h2><strong>Short history about 3D world</strong></h2><a href="#list_contents">top</a> <br />
Our grandpa of the 3D world is Euclid, also known as Euclid of Alexandria. He lived around 323–283 BC (whoa, that's a little bit old!) in the Greek city of Alexandria. Euclid created what we still use today, called Euclidean Space and Euclidean Geometry; I'm sure you've heard these names before. Basically, Euclidean Space is formed by 3 planes, which give us the X, Y and Z axes. Each of those planes uses traditional geometry, which owes a lot to another Greek, Pythagoras (570 BC - 495 BC). Well, it's not hard to figure out why Euclid developed his concepts: you know, the Greeks loved architecture, and in order to construct perfect forms they needed to make all the calculations in an imaginary 3D world, not to mention their philosophy and passion for science.</p>

<p>Advancing many years in our Time Machine, we stop at the beginning of the 17th century, where a great man called René Descartes created something called the Cartesian coordinate system. That was amazing! It created the bridge between Euclid's theory and linear algebra, introducing matrices into the Euclidean Transformations (translate, scale and rotate). Euclidean Transformations used to be done with traditional Pythagorean approaches, so you can imagine how many calculations were involved; but thanks to Descartes we can perform Euclidean Transformations using matrices. It's simple, it's fast, it's pure beauty! Matrices in the 3D world are awesome!</p>

<p>But matrices with Euclidean Transformations were not perfect. They produce some problems; the biggest one is related to rotations and is called Gimbal Lock. It happens when you rotate one plane and unintentionally the other two planes align with each other; then the next rotation of either of those two planes will produce the Gimbal Lock, that is, it will involuntarily rotate both locked axes. Many years later, in 1843, another great man called Sir William Rowan Hamilton created a method to deal with Euclidean rotations while avoiding the Gimbal Lock: Hamilton created something called Quaternions! Quaternions are the fastest, best and most elegant way to deal with 3D rotations. A quaternion is composed of an imaginary part (complex numbers) and a real part. As in the 3D world we always calculate with unit vectors (vectors whose magnitude/length equals 1), we can represent a rotation with just four real components and leave the imaginary machinery aside. To be precise, Quaternions were part of Hamilton's thesis and include much more than just 3D rotations, but to us and to the 3D world, their principal application is dealing with rotations.</p>

<p>OK, what does all of that have to do with cameras? It's simple: based on all this, we started using 4x4 matrices to deal with Euclidean Transformations and a vector with 4 elements to describe a point in space (X, Y, Z, W). The W is the Homogeneous Coordinate element. I won't go deep into it here, but just so you know, Homogeneous Coordinates were created by August Ferdinand Möbius in 1827 to deal with the concept of infinity in the Cartesian system. We'll talk about Möbius' contribution later on; in short, the concept of infinity is very hard to fit into the Cartesian system. We could use a complex imaginary number for it, but that is not good for real calculations. So, to solve this problem, Möbius just added one variable, W, which is a real number, and took us back to the world of real numbers. Anyway, the point is that a 4x4 matrix fits perfectly with a 4-element vector, and as we use a single matrix to perform the Euclidean Transformations in our 3D world, it is a good idea to use the same kind of 4x4 matrix to deal with a camera in the 3D world.</p>
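<p>The effect of W can be shown in a few lines of C. This is a hedged sketch (assuming OpenGL's column-major layout): a translation matrix moves a point (W = 1) but leaves a direction (W = 0) untouched, which is precisely what the homogeneous coordinate buys us:</p>

```c
#include <assert.h>

typedef struct { float x, y, z, w; } vec4;

// Multiplies a column-major 4x4 matrix by a 4-component vector.
static vec4 mat4MultiplyVec4(const float m[16], vec4 v)
{
    return (vec4){
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
        m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w
    };
}

// Identity matrix carrying a translation of (5, 0, 0) in its last column.
static const float kTranslate5X[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    5, 0, 0, 1
};
```

<p>A point (X, Y, Z, 1) picks up the translation through the W term; a direction (X, Y, Z, 0) ignores it, which is why normals and light directions are transformed with W = 0.</p>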

<p><img src='http://db-in.com/images/matrix_quaternion_example.jpg'  alt="Visual representation of a Matrix 4x4 and a Quaternion." title="matrix_quaternion_example" width="600" height="497" class="size-full wp-image-1353" /></p>

<p>The image above shows what a 4x4 matrix and a quaternion look like visually. As you can see, the matrix has 3 independent slots for translation (position X,Y,Z), but the other instructions are mixed together in its red area. Each rotation (X, Y and Z) affects 4 slots and each scale (X, Y and Z) affects 1 slot. A quaternion has 4 real numbers: 3 of them represent a vertex (X, Y and Z), and this vertex defines a direction. The fourth value represents the rotation around that axis. We'll talk about quaternions later, but one of their coolest features is that we can extract a rotation matrix from one. We do so by constructing a 4x4 matrix with only the yellow elements filled in.</p>

<p>You could be thinking: "WTF!". Calm down, the practice is not as hard as the theory! But before putting our hands on code, we need to grasp just one more concept: projections.</p>

<p><br/><a name="projections"></a> <br />
<h2><strong>Projections</strong></h2><a href="#list_contents">top</a> <br />
Instead of explaining this in a technical manner, I'll just show you! I'm sure you already know the difference between the two projection types, maybe under other names, but I'm sure you know what they mean: <br />
<img src='http://db-in.com/images/projection_example.gif'  alt="Differences between Orthographic and Perspective Projection." title="projection_example" width="600" height="491" class="size-full wp-image-1354" /></p>

<p><img src='http://db-in.com/images/projection_simcity_example.jpg'  alt="" title="projection_simcity_example" width="315" height="235" class="alignleft size-full wp-image-1359" />Did you see? It's very simple. The Orthographic projection is commonly used in 2D games, like Sim City or The Sims (old versions), or even the best seller Diablo (except the new Diablo III, which really uses the Perspective projection). The Orthographic projection doesn't exist in the real world; it's impossible for human eyes to receive images this way, because there is always a vanishing point in the images formed by our eyes. So real cameras always capture the image with a Perspective projection.</p>

<p>Many people ask me about 2D graphics with OpenGL; well, here is my first tip about how to do it: you will need an Orthographic projection to create games like Sim City or The Sims. Games like Starcraft use a Perspective projection simulating an Orthographic projection. Is that possible? Yes! As with everything related to lens behavior, the final image depends on many factors. For example, a Perspective projection with a great angle of view can look a lot like an Orthographic projection; it's like looking at the ground from an airplane in the air. From that distance, the cities look like a mockup and the vanishing point seems to have no effect.<img src='http://db-in.com/images/projection_starcraft_high_example.jpg'  alt="" title="projection_starcraft_high_example" width="315" height="235" class="alignright size-medium wp-image-1361" /></p>

<p>Before continuing, we need to make a little digression to understand in depth the difference between those two projections. You remember René Descartes and his Cartesian system, right? In linear algebra, two parallel lines never touch, not even at infinity. How could we deal with the idea of infinity in linear algebra? Using calculations with ∞ (the infinity symbol)? That's not useful. To create a Perspective projection we really need a vanishing point, and with it two parallel lines must touch at infinity. So, how can we solve that? We can't! At least not with linear algebra alone. We'll need something else.</p>

<p>Thanks to a man called August Ferdinand Möbius we can deal with this little problem. This man created something called "Homogeneous Coordinates".<img src='http://db-in.com/images/vanish_point_example.jpg'  alt="" title="vanish_point_example" width="300" height="300" class="alignleft size-full wp-image-1360" /> The idea is so incredibly simple it's unbelievable (just the way I like it). Möbius simply added one last coordinate to any dimensional system: the coordinate <em>w</em>. 2D becomes 2D + 1 (x,y -> x,y,w), 3D becomes 3D + 1 (x,y,z -> x,y,z,w). In the space calculations we just divide our original values by <em>w</em>, that's it! Look at this pseudo code:</p>

<table width="675">  
<tr>  
<th>Homogeneous Coordinates</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
// These are the original Cartesian coordinates.
x = 3.0;
y = 1.5;

// This is the new homogeneous coordinate.
w = 1.0;

// Space calculations.
finalX = x/w;
finalY = y/w;
</pre>

<p>The <em>w</em> will be 1.0 in the majority of cases. It changes only to represent ∞ (infinity), in which case <em>w</em> will be 0.0! "WTF!!! Division by 0?" Not exactly. The <em>w</em> is often used to solve systems of two equations, so if we get a 0, the projection is sent to infinity. I know, it seems confusing in theory, but a simple practical example is the generation of shadows. When we have a light at infinity in a 3D scene, or a light without attenuation, like sunlight in the 3D world, we can simply create the shadows generated by that light using <em>w</em> equal to 0. This way the shadow is projected onto a wall or a floor exactly as the original model is. Obviously, lights and shadows in the real world are stupidly more complex than this, but remember that in our virtual 3D world we are just playing at copying the real behavior. With a few more steps we can simulate a more realistic shadow behavior; that is fine for professional 3D software to implement in its renderer, but it is not a good solution for a game, because more realistic shadows take a lot of processing on the CPU and GPU. For a game, casting the shadows using Möbius' idea is pretty simple and looks very nice to the player!</p>

<p>OK, that's all about projections; now let's move on to OpenGL and see how we can implement all these concepts in our code. Matrices and quaternions will be our allies.</p>

<p><br/><a name="3d_cameras"></a> <br />
<h2><strong>Cameras in the 3D world</strong></h2><a href="#list_contents">top</a> <br />
The first thing we need to understand is how the transformations happen in the 3D world. Once we define a 3D object's structure, that structure will remain intact (the structure being vertices, texture coordinates and normals). What changes frame by frame is just some matrices (often just one matrix). Those matrices produce temporary changes based on the original structure. So bear this in mind: "the original structure never changes"!</p>
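<p>To make this idea concrete, here is a minimal sketch of my own (the function name and types are not from any library) showing how a single 4x4 matrix, stored column-major as explained later in this article, transforms one vertex without ever touching the original data:</p>

```c
#include <assert.h>

typedef float mat4[16]; // Column-major 4x4 matrix.
typedef float vec4[4];  // Point in space (X,Y,Z,W).

// Multiplies a column-major 4x4 matrix by a point (X,Y,Z,W).
// The input point "p" is never modified; the result goes into "out",
// just like the original object structure never changes in the 3D world.
void transformPoint(const mat4 m, const vec4 p, vec4 out)
{
    for (int row = 0; row < 4; row++)
        out[row] = m[0*4 + row] * p[0] + m[1*4 + row] * p[1]
                 + m[2*4 + row] * p[2] + m[3*4 + row] * p[3];
}
```

<p>For example, a translation matrix with X = 2 carries the point (1,0,0,1) to (3,0,0,1), while the array holding the original vertex stays untouched.</p>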

<p>Thereby, when we rotate an object on the screen, what we are really doing is creating a matrix which contains the information to make that rotation happen. Then in our shaders, when we multiply that matrix by the vertices of our object, the object appears to rotate on the screen. The same is true for any other 3D element, like lights or cameras. But the camera object has a special behavior: all the transformations on it <strong>must be inverted</strong>. The examples below can help to understand this:</p>

<p><img src='http://db-in.com/images/camera_behavior_1_example.gif'  alt="Rotating the camera in CW around an object." title="camera_behavior_1_example" width="600" height="600" class="size-full wp-image-1362" /> <br />
<img src='http://db-in.com/images/camera_behavior_2_example.gif'  alt="Rotating the object in CCW around its own Y axis." title="camera_behavior_2_example" width="600" height="600" class="size-full wp-image-1363" /></p>

<p>Notice in the pictures above that the resulting image on the device's screen is the same in both cases. This behavior leads us to an idea: every camera movement is inverted relative to the object space. For example, if the camera moves along the +Z axis, this produces the same effect as sending the 3D object along the -Z axis. Rotating the camera around +Y has the same effect as rotating the 3D object around its local -Y axis. So hold on to this idea: <strong>every transformation on the camera is inverted</strong>; we'll use this soon.</p>

<p>The next concept about the camera is how the local space interacts with the world space. In the examples above, rotating the object around its local -Y produces the same result as rotating the camera around +Y and moving it around the object, taking the camera as the pivot. To deal with this, once again operations with matrices save our day. To change from local space rotations to global space rotations, all we need is to change the order of the matrices in a multiplication (A x B = local, B x A = global; remember that matrix multiplication is not commutative). So we must <strong>multiply the camera's matrix by the object's matrix</strong>, in this order.</p>

<p>OK, I know, these techniques seem very confusing, but trust me, the code is much simpler than you imagine. Let's review those concepts and dive into the code.  </p>

<ul>  
    <li>We never change the object's structure; what changes is just some matrices, which are multiplied by the original object's structure to achieve the desired result.</li>
    <li>On the camera, all transformations must be inverted before constructing its matrix.</li>
    <li>As the camera will be our eyes in the 3D world, we take the camera as the local space, so the final matrix will be the result of CameraMatrix x ObjectMatrix, in exactly this order.</li>
</ul>
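<p>As a tiny illustration of the second rule, here is a sketch of my own (the helper names are not part of any official API): a camera sitting at (x, y, z) contributes a matrix that translates the world the opposite way.</p>

```c
typedef float mat4[16]; // Column-major 4x4 matrix.

// Resets a matrix to the identity (1s on the diagonal slots 0, 5, 10, 15).
void matrixIdentity(mat4 m)
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
}

// Builds the camera's translation matrix from its position. Every value is
// inverted, because moving the camera to +Z looks exactly the same on screen
// as moving the whole world to -Z.
void cameraTranslation(float x, float y, float z, mat4 m)
{
    matrixIdentity(m);
    m[12] = -x;
    m[13] = -y;
    m[14] = -z;
}
```

<p>The same inversion idea applies to the camera's rotations; we'll see later that for pure rotation matrices the inversion can be done very cheaply.</p>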

<p><br/><a name="3d_code"></a> <br />
<h2><strong>The code behind the 3D world</strong></h2><a href="#list_contents">top</a> <br />
I'll show you the formulas to do all this work and explain their usage, but I'll not go deep into the mathematical logic behind the formulas; showing how each formula was derived is not my intention here. If you are interested in knowing in depth where these formulas come from, I suggest a book, a great book by the way, where all these formulas came from, and also a great mathematical website:  </p>

<ul>  
    <li>Recommended book: <a href='http://www.amazon.com/Mathematics-Programming-Computer-Graphics-Second/dp/1584502770/ref=sr_1_2?ie=UTF8&qid=1300892618&sr=8-2'  target="_blank">Mathematics for 3D Game Programming and Computer Graphics</a>.</li>
    <li>Website: <a href='http://www.euclideanspace.com/'  target="_blank">http://www.euclideanspace.com</a></li>
</ul>

<p>The EuclideanSpace website doesn't have a good layout, I know; it seems a little amateur, but trust me, all the formulas in there are very reliable, ALL of them. Navigation is done through the top menu; it can seem confusing at first glance, but it's very well organized, mathematically speaking.</p>

<p>OK, so let's start with the matrices.</p>

<p><br/><a name="matrices"></a> <br />
<h3>Matrices</h3><a href="#list_contents">top</a> <br />
Some guys see the matrix as a black box with magic inside. Well, what it does really seems magical, but it's not exactly a black box. It's more like a very well organized package, and we can understand how that magic works and what its "tricks" are; by understanding its organization we can make great things with matrices. Remember that everything done by a matrix was also done by Euclid using only Pythagoras and the concept of angles. René Descartes just placed all that knowledge into a single package, called a matrix. In the 3D world we'll use a 4x4 matrix (4 rows with 4 columns), also known as a square matrix. The fastest and simplest way to represent a matrix in a programming language is through arrays, more specifically a linear one-dimensional array with 16 elements.</p>

<p>Using a linear one-dimensional array we can represent a matrix in two ways: row-major or column-major. This is just a convention, because in reality pre-multiplying a row-major matrix or post-multiplying a column-major matrix produces the same result. Well, as OpenGL prefers the column-major notation, let's follow it. <br />
This is how the array indices are organized in the column-major notation:</p>

<table width="675">  
<tr>  
<th>Column-Major Notation</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |    0        4        8        12   |
    |                                    |
    |    1        5        9        13   |
    |                                    |
    |    2        6        10       14   |
    |                                    |
    |    3        7        11       15   |
.
</pre>
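<p>In other words, with the column-major notation the array index of the element at a given row and column is <strong>column * 4 + row</strong>. A tiny sketch (the helper name is mine, just for illustration) makes the mapping explicit:</p>

```c
// Maps (row, column) of a column-major 4x4 matrix to its index
// in the 16-element linear array: index = column * 4 + row.
int indexAt(int row, int column)
{
    return column * 4 + row;
}
```

<p>This is why, as we are about to see, the translation values X, Y and Z land in slots 12, 13 and 14: they are rows 0, 1 and 2 of the last column.</p>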

<p>Now I'll show separately 5 kinds of matrices: Translation Matrix, Scale Matrix, Rotation X Matrix, Rotation Y Matrix and Rotation Z Matrix. Later we'll see how to join all these into one single matrix.</p>

<p>The simplest operation with 4x4 matrices is the translation, changing the X, Y and Z position. It's very, very simple; you don't even need a formula. Here is what you need to do:</p>

<table width="675">  
<tr>  
<th>Translation Matrix</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |    1        0        0        X    |
    |                                    |
    |    0        1        0        Y    |
    |                                    |
    |    0        0        1        Z    |
    |                                    |
    |    0        0        0        1    |
.
</pre>

<p>The second stupidly simple operation is the scale. As you may have seen in professional 3D software, you can change the scale individually for each axis. This operation doesn't need a formula either. Here is what you need to do:</p>

<table width="675">  
<tr>  
<th>Scale Matrix</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |    SX       0        0        0    |
    |                                    |
    |    0        SY       0        0    |
    |                                    |
    |    0        0        SZ       0    |
    |                                    |
    |    0        0        0        1    |
.
</pre>

<p>Now let's complicate it a little bit. It's time to perform a rotation with a matrix around one specific axis. We can think about rotations in the 3D world using the Right Hand Rule. The right hand rule defines the positive direction of all 3 axes, and it defines the positive rotation direction as well. Align your thumb along the positive direction of an axis and close the other fingers: the direction your fingers point to is the positive rotation around that axis:</p>

<p><img src='http://db-in.com/images/rotations_example.jpg'  alt="Rotations using the Right Hand Rule." title="rotations_example" width="600" height="517" class="size-full wp-image-1365" /></p>

<p>We can create a rotation matrix using one angle around one axis. To do that, we'll use sines and cosines. As you know, the angles should be in radians, not in degrees. To convert degrees to radians we use <strong>angle * PI / 180</strong>, and to convert radians to degrees we use <strong>angle * 180 / PI</strong>. My advice here, to gain performance, is: "keep PI / 180 and 180 / PI pre-calculated". Using C macros, I like to use something like this:</p>

<pre class="brush:as3">  
// Pre-calculated value of PI / 180.
#define kPI180     0.017453

// Pre-calculated value of 180 / PI.
#define k180PI    57.295780

// Converts degrees to radians.
#define degreesToRadians(x) (x * kPI180)

// Converts radians to degrees.
#define radiansToDegrees(x) (x * k180PI)
</pre>

<p>OK, so with the values of our rotations in radians, it's time to use the following formulas:</p>

<table width="675">  
<tr>  
<th>Rotate X</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |    1        0        0        0    |
    |                                    |
    |    0      cos(θ)   sin(θ)     0    |
    |                                    |
    |    0     -sin(θ)   cos(θ)     0    |
    |                                    |
    |    0        0        0        1    |
.
</pre>

<table width="675">  
<tr>  
<th>Rotate Y</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |  cos(θ)     0    -sin(θ)      0    |
    |                                    |
    |    0        1        0        0    |
    |                                    |
    |  sin(θ)     0     cos(θ)      0    |
    |                                    |
    |    0        0        0        1    |
.
</pre>

<table width="675">  
<tr>  
<th>Rotate Z</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    |  cos(θ)  -sin(θ)     0        0    |
    |                                    |
    |  sin(θ)   cos(θ)     0        0    |
    |                                    |
    |    0        0        1        0    |
    |                                    |
    |    0        0        0        1    |
.
</pre>

<p>Maybe you have seen the same formulas elsewhere with the minus signs on different elements. Remember, they often teach the traditional mathematical way, which uses the row-major notation, while here we are using the column-major notation, which fits straight into the OpenGL pipeline.</p>

<p>Now it's time to join all of those matrices together. Just as with ordinary numbers, we need to multiply them to get the final result. But matrix multiplication has some special behaviors. You probably remember some of them from high school or college.</p>

<ul>  
    <li>Matrix multiplication is not commutative: A x B is different from B x A. </li>
    <li>Multiplying A x B means multiplying each value of A's rows by each value of B's columns and summing the products.</li>
    <li>To multiply A x B, matrix A MUST have a number of columns EQUAL to the number of rows in B; otherwise the multiplication can't be performed.</li>
</ul>

<p>Well, in the 3D world we always have square matrices, 4x4 or in some cases 3x3, so we can only multiply a 4x4 by another 4x4. Now, let's dive into the code, using an array of 16 elements to compute all the formulas above:</p>

<table width="675">  
<tr>  
<th>Matrix Formulas with Array</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
typedef float mat4[16];

void matrixIdentity(mat4 m)  
{
    m[0] = m[5] = m[10] = m[15] = 1.0;
    m[1] = m[2] = m[3] = m[4] = 0.0;
    m[6] = m[7] = m[8] = m[9] = 0.0;
    m[11] = m[12] = m[13] = m[14] = 0.0;
}

void matrixTranslate(float x, float y, float z, mat4 matrix)  
{
    matrixIdentity(matrix);

    // Translate slots.
    matrix[12] = x;
    matrix[13] = y;
    matrix[14] = z;   
}

void matrixScale(float sx, float sy, float sz, mat4 matrix)  
{
    matrixIdentity(matrix);

    // Scale slots.
    matrix[0] = sx;
    matrix[5] = sy;
    matrix[10] = sz;
}

void matrixRotateX(float degrees, mat4 matrix)  
{
    float radians = degreesToRadians(degrees);

    matrixIdentity(matrix);

    // Rotate X formula.
    matrix[5] = cosf(radians);
    matrix[6] = -sinf(radians);
    matrix[9] = -matrix[6];
    matrix[10] = matrix[5];
}

void matrixRotateY(float degrees, mat4 matrix)  
{
    float radians = degreesToRadians(degrees);

    matrixIdentity(matrix);

    // Rotate Y formula.
    matrix[0] = cosf(radians);
    matrix[2] = sinf(radians);
    matrix[8] = -matrix[2];
    matrix[10] = matrix[0];
}

void matrixRotateZ(float degrees, mat4 matrix)  
{
    float radians = degreesToRadians(degrees);

    matrixIdentity(matrix);

    // Rotate Z formula.
    matrix[0] = cosf(radians);
    matrix[1] = sinf(radians);
    matrix[4] = -matrix[1];
    matrix[5] = matrix[0];
}
</pre>

<p>And here is the code for the multiplication of two 16-element arrays representing 4x4 matrices.</p>

<table width="675">  
<tr>  
<th>Matrix Multiplication</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
void matrixMultiply(mat4 m1, mat4 m2, mat4 result)  
{
    // First Column
    result[0] = m1[0]*m2[0] + m1[4]*m2[1] + m1[8]*m2[2] + m1[12]*m2[3];
    result[1] = m1[1]*m2[0] + m1[5]*m2[1] + m1[9]*m2[2] + m1[13]*m2[3];
    result[2] = m1[2]*m2[0] + m1[6]*m2[1] + m1[10]*m2[2] + m1[14]*m2[3];
    result[3] = m1[3]*m2[0] + m1[7]*m2[1] + m1[11]*m2[2] + m1[15]*m2[3];

    // Second Column
    result[4] = m1[0]*m2[4] + m1[4]*m2[5] + m1[8]*m2[6] + m1[12]*m2[7];
    result[5] = m1[1]*m2[4] + m1[5]*m2[5] + m1[9]*m2[6] + m1[13]*m2[7];
    result[6] = m1[2]*m2[4] + m1[6]*m2[5] + m1[10]*m2[6] + m1[14]*m2[7];
    result[7] = m1[3]*m2[4] + m1[7]*m2[5] + m1[11]*m2[6] + m1[15]*m2[7];

    // Third Column
    result[8] = m1[0]*m2[8] + m1[4]*m2[9] + m1[8]*m2[10] + m1[12]*m2[11];
    result[9] = m1[1]*m2[8] + m1[5]*m2[9] + m1[9]*m2[10] + m1[13]*m2[11];
    result[10] = m1[2]*m2[8] + m1[6]*m2[9] + m1[10]*m2[10] + m1[14]*m2[11];
    result[11] = m1[3]*m2[8] + m1[7]*m2[9] + m1[11]*m2[10] + m1[15]*m2[11];

    // Fourth Column
    result[12] = m1[0]*m2[12] + m1[4]*m2[13] + m1[8]*m2[14] + m1[12]*m2[15];
    result[13] = m1[1]*m2[12] + m1[5]*m2[13] + m1[9]*m2[14] + m1[13]*m2[15];
    result[14] = m1[2]*m2[12] + m1[6]*m2[13] + m1[10]*m2[14] + m1[14]*m2[15];
    result[15] = m1[3]*m2[12] + m1[7]*m2[13] + m1[11]*m2[14] + m1[15]*m2[15];
}
</pre>

<p>As you know, standard C doesn't allow us to return arrays from a function, so we need to pass a pointer to our result array. If you are using a language which allows returning arrays, like JavaScript or ActionScript, you may prefer to return a literal array instead of working with a pointer.</p>

<p>Now one very important thing: you CAN'T combine matrices directly, like applying the rotationX formula on top of a matrix that already holds a rotationZ, for example. YOU MUST CREATE EACH MATRIX SEPARATELY AND THEN MULTIPLY THEM TWO BY TWO UNTIL YOU GET THE FINAL RESULT!</p>

<p>For example, to translate, rotate and scale an object you must create each matrix separately and then perform the multiplication <strong>((Scale * Rotation) * Translation)</strong> to get the final transformation matrix.</p>
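<p>Here is a self-contained sketch of that composition, using a compact loop version of the expanded multiplication shown above (the function names are mine, but the math is identical). Notice how scaling a matrix that already holds a translation ends up scaling the translation too, which is exactly why the multiplication order matters:</p>

```c
#include <assert.h>

typedef float mat4[16]; // Column-major 4x4 matrix.

void identity(mat4 m)
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
}

// Compact loop form of the matrixMultiply above: result = m1 x m2.
// "result" must not alias m1 or m2.
void multiply(const mat4 m1, const mat4 m2, mat4 result)
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++)
        {
            result[col * 4 + row] = 0.0f;
            for (int k = 0; k < 4; k++)
                result[col * 4 + row] += m1[k * 4 + row] * m2[col * 4 + k];
        }
}
```

<p>For instance, multiplying a scale-by-2 matrix by a translation of (1,2,3), in that order, yields a matrix whose translation slots hold (2,4,6): the translation happened first and was then scaled.</p>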

<p>Now let's talk about some tips and tricks with matrices.</p>

<p><br/><a name="matrices_deep"></a> <br />
<h3>Matrices in Deep</h3><a href="#list_contents">top</a> <br />
It's time to open that "black box" and understand what happens inside it. Some time ago I worked with matrices without understanding what exactly happens in there: what it really means to pre-multiply or post-multiply a matrix by another, what the purpose of transposing a matrix is, and, if all matrices are column-major, why use the inverse. Well... I have to say that everything in my world changed after watching some classes from the Massachusetts Institute of Technology (MIT) about matrices. I want to share that knowledge with you:</p>

<ol>  
    <li>What does it mean to pre- or post-multiply matrices? It's the order in which things happen. The second matrix in the multiplication WILL MAKE ITS CHANGES FIRST! (lol). If we multiply A x B, this means that B happens first and then A. So if we multiply Rotation x Translation, the object first translates and then rotates. The same holds for scales too.</li>
<img src='http://db-in.com/images/multiplication_order.jpg'  alt="The order in Matrices Multiplication indicates what happens first." title="multiplication_order" width="600" height="700" class="size-full wp-image-1366" />  
    <li>Using the logic above, we can understand why the difference between local rotations and global rotations is just pre- or post-multiplying one rotation matrix by another. If you always post-multiply the new rotation matrix, the object first makes the new rotation and then the old ones; this is a local rotation. If you always pre-multiply the new rotation, the object first rotates by the old values and then by the new one; this is a global rotation.</li>
    <li>Any 3D object always has 3 local vectors: the Right Vector, the Up Vector and the Look Vector. These vectors are very important for performing the Euclidean Transformations on it (scales, rotations and translations), especially when you are making local transformations. The good news is: do you remember the rotation formulas? What those formulas do is transcribe the rotation angles into these vectors and place them in the matrix. So you can extract these vectors directly from a rotation matrix, and the best thing is that these vectors come already normalized in the matrix.</li>
<img src='http://db-in.com/images/local_vectors.jpg'  alt="The local vectors in a matrix with column-major notation." title="local_vectors" width="600" height="600" class="size-full wp-image-1367" />  
    <li>The next cool thing is about orthogonal matrices (don't confuse this with orthonormal, which is said of orthogonal unit vectors). In theory, an orthogonal matrix is one with real entries whose columns and rows are orthogonal unit vectors; in very simple words, an orthogonal matrix is what we call a rotation matrix, without scales! I'll repeat this, it's very important: "Orthogonal means a ROTATION MATRIX, a pure rotation matrix, without any scale!". Using the rotation formulas we get only unit vectors, and they are always orthogonal! What the hell are unit and orthogonal vectors? It's very simple: unit vectors are vectors whose length/magnitude equals 1, hence the name "unit" vector. Orthogonal is said of two or more vectors which have an angle of 90º between them. Look at the picture above again and notice that the Right, Up and Look vectors are always orthogonal in the 3D world.
    </li>
    <li>Still on orthogonal matrices: if a matrix is orthogonal, its inverse is equal to its transpose. WOW! This is great! Because to calculate the inverse we need more than 100 multiplications and more than 40 sums, but to calculate the transpose we don't need any calculation at all, we just swap the positions of some values. This is a big boost to our performance. But why would we want the inverse of a rotation matrix? To calculate the lights in the shaders! Remember that the real object never changes; we just change the computation of its vertices by multiplying them by a matrix. So to calculate a light, which is in global space, we need the inverse of the rotation matrix. We will obviously need the inverse for other things too, like cameras, so using the transpose instead of the inverse is great! And just to leave no doubts: the inverse matrix is, technically, a matrix which, multiplied by the original (pre or post, it doesn't matter here), produces the identity matrix. In simple words, the inverse matrix is the one that reverts all the transformations of the original matrix.
    </li>
</ol>
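<p>To illustrate that last item, here is a sketch of my own, using the same 16-element column-major layout as the rest of this article. The transpose is just a reshuffle of indices, no arithmetic at all:</p>

```c
typedef float mat4[16]; // Column-major 4x4 matrix.

// Transposes a 4x4 matrix: element (row, col) swaps with (col, row).
// For an orthogonal (pure rotation) matrix, this IS the inverse.
// "out" must not be the same array as "m".
void matrixTranspose(const mat4 m, mat4 out)
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++)
            out[row * 4 + col] = m[col * 4 + row];
}
```

<p>Multiplying a pure rotation matrix by its transpose gives back the identity matrix, which is precisely what multiplying by the inverse does, at a tiny fraction of the cost.</p>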

<p>You can extract the values of rotation, scale and translation from a matrix. Unfortunately there is no precise way to extract negative scales from a matrix. You can find the formulas to extract the values from a matrix in that book, Mathematics for 3D Game Programming, or on the EuclideanSpace website. I'll not cover those formulas here because I have a better piece of advice: "Instead of trying to retrieve values from a matrix, it's much, much better to store user-friendly values, like the global rotations (X, Y and Z), the local scales (X, Y and Z) and the global positions (X, Y and Z)."</p>

<p>Now it's time to learn about Hamilton's great contribution, the Quaternions!</p>

<p><br/><a name="quaternions"></a> <br />
<h3>Quaternions</h3><a href="#list_contents">top</a> <br />
Quaternions are, to me, the greatest package invention in 3D calculus. If the matrix is very organized and can seem "magical" to some people, the quaternion is what those people would call a "miracle". Quaternions are unbelievably simple. In stupidly simple words, a quaternion is: "Take a vector as a direction and rotate around that axis!".</p>

<p>If you do a little research, you'll find many discussions about it. Quaternions are very polemical! Some guys love them, others hate them. Some say they are just a fad, others say they are amazing. Why all this fuss around quaternions? Well, it's because, starting from the rotation formulas, a formula was found to produce rotations about an arbitrary axis directly, avoiding the Gimbal Lock; in other words, that formula produces the same effect as a quaternion (in fact it is very similar). I'll not show that formula here, because I don't believe it is a good solution.</p>

<p>About the war of Quaternions vs. Rotation about an arbitrary axis, you'll find people saying that the latter takes 27 multiplications, plus some sums, a sine, a cosine and a vector magnitude, against 21 or 24 multiplications for quaternions, and all that kind of annoying discussion! Whatever!!! With current hardware, you can cut 10,000,000 multiplications from your application and all the gain you'll get is around 0.04 secs (directly on the iPhone 4 the mark was 0.12 secs for 10,000,000 multiplications)! This is not remarkable. There are several things much more important than multiplications to boost your application's performance. In reality, the difference between those numbers will be less than 1,000 per frame.</p>

<p>So what is the crucial point about Quaternions vs. Rotation Formulas? My love for quaternions comes from the fact that they are SIMPLE! Very organized, very clear and incredibly precise when you make consecutive rotations. I'll show you how to work with quaternions, and you can make your own decision.</p>

<p>Let's start with a simple concept. A quaternion, as the name suggests, is a vector of order 4 (x,y,z,w). Just by convention we usually write quaternions as w,x,y,z, with the "w" first. But this really doesn't matter, because all operations with quaternions always spell out the letters x, y, z and w. An alert! Don't confuse the "w" of quaternions with the "w" from homogeneous coordinates; those are two completely different things.</p>

<p>As a quaternion is a vector of 4 elements, many vector operations apply to it. But only a few formulas are really important: multiplication, identity and inverse. Before starting with those three formulas, I want to introduce you to the formula to extract a matrix from a quaternion. This is the most important one:</p>

<table width="675">  
<tr>  
<th>Quaternion To Matrix</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    // This is the arithmetical formula optimized to work with unit quaternions.
    // |1-2y²-2z²        2xy-2zw         2xz+2yw       0|
    // | 2xy+2zw        1-2x²-2z²        2yz-2xw       0|
    // | 2xz-2yw         2yz+2xw        1-2x²-2y²      0|
    // |    0               0               0          1|

    // And this is the code.
    // First Column
    matrix[0] = 1 - 2 * (q.y * q.y + q.z * q.z);
    matrix[1] = 2 * (q.x * q.y + q.z * q.w);
    matrix[2] = 2 * (q.x * q.z - q.y * q.w);
    matrix[3] = 0;

    // Second Column
    matrix[4] = 2 * (q.x * q.y - q.z * q.w);
    matrix[5] = 1 - 2 * (q.x * q.x + q.z * q.z);
    matrix[6] = 2 * (q.z * q.y + q.x * q.w);
    matrix[7] = 0;

    // Third Column
    matrix[8] = 2 * (q.x * q.z + q.y * q.w);
    matrix[9] = 2 * (q.y * q.z - q.x * q.w);
    matrix[10] = 1 - 2 * (q.x * q.x + q.y * q.y);
    matrix[11] = 0;

    // Fourth Column
    matrix[12] = 0;
    matrix[13] = 0;
    matrix[14] = 0;
    matrix[15] = 1;
.
</pre>

<p>Just like the matrix formulas, this conversion always produces an orthogonal matrix with unit vectors. In some places you may find an arithmetical formula which uses "w²+x²-y²-z²" instead of "1-2y²-2z²". Don't worry, this is because Hamilton's original quaternions were much more complex. They have an imaginary part (i, j and k) and can be more than just unit quaternions. But as in the 3D world we always work with unit vectors, we can drop the imaginary part and assume they will always be unit quaternions. Because of this optimization, we can use the formula with "1-2y²-2z²".</p>

<p>Now, let's talk about other formulas. First, the multiplication formula:</p>

<table width="675">  
<tr>  
<th>Quaternion Multiplication</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
     // Assume that this multiplies q1 x q2, in this order, resulting in "newQ".
    newQ.w = q1.w * q2.w - q1.x * q2.x - q1.y * q2.y - q1.z * q2.z;
    newQ.x = q1.w * q2.x + q1.x * q2.w + q1.y * q2.z - q1.z * q2.y;
    newQ.y = q1.w * q2.y - q1.x * q2.z + q1.y * q2.w + q1.z * q2.x;
    newQ.z = q1.w * q2.z + q1.x * q2.y - q1.y * q2.x + q1.z * q2.w;
.
</pre>

<p>That multiplication formula has exactly the same effect as multiplying one rotation matrix by another, for example. And just like matrix multiplication, quaternion multiplication is not commutative: <strong>q1 x q2</strong> is not equal to <strong>q2 x q1</strong>. Note that I won't show the arithmetical derivation of the multiplication here. The original multiplication of two 4-component vectors is much more involved than a cross product of two 3-component vectors and requires a matrix multiplication, which could confuse things (the arithmetical formula does result in the code above). Let's focus on what is important. But if you are interested in knowing more about multiplication with 4-component vectors, try this: <a href='http://www.mathpages.com/home/kmath069.htm'  target="_blank">http://www.mathpages.com/home/kmath069.htm</a></p>
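<p>To make the non-commutativity concrete, here is the formula above wrapped in a small C function (the <strong>vec4</strong> type and function name are illustrative, not from a particular library). Multiplying a 90° rotation about X by a 90° rotation about Y in the two possible orders gives two different quaternions; their z components even come out with opposite signs:</p>

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z, w; } vec4;

/* The multiplication formula from above: result = q1 x q2,
   which means "do q2 first, then q1". */
vec4 quaternionMultiply(vec4 q1, vec4 q2)
{
    vec4 newQ;
    newQ.w = q1.w * q2.w - q1.x * q2.x - q1.y * q2.y - q1.z * q2.z;
    newQ.x = q1.w * q2.x + q1.x * q2.w + q1.y * q2.z - q1.z * q2.y;
    newQ.y = q1.w * q2.y - q1.x * q2.z + q1.y * q2.w + q1.z * q2.x;
    newQ.z = q1.w * q2.z + q1.x * q2.y - q1.y * q2.x + q1.z * q2.w;
    return newQ;
}
```

<p>For a 90° rotation about an axis, the half-angle is 45°, so sin and cos are both about 0.7071. Multiplying X-then-Y versus Y-then-X produces visibly different results, exactly like swapping the order of two rotation matrices.</p>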

<p>Next, the identity. The identity quaternion produces the identity matrix; here is the formula:</p>

<table width="675">  
<tr>  
<th>Quaternion Identity</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    q.x = 0;
    q.y = 0;
    q.z = 0;
    q.w = 1;
.
</pre>

<p>OK, now this is the formula to invert a quaternion, also known as the "conjugate quaternion" (for unit quaternions, the conjugate and the inverse are the same thing):</p>

<table width="675">  
<tr>  
<th>Quaternion Inverse</th>  
</tr>  
</table>  

<pre class="brush:cpp; gutter:false">  
.
    q.x *= -1;
    q.y *= -1;
    q.z *= -1;

    // At this point it is a good idea to normalize the quaternion again.
.
</pre>

<p>I love this inverse formula because of its simplicity. It's so simple! And these three lines of code have exactly the same effect as taking the inverse of a matrix! (Ô.o) <br />
Yes: if you are working with quaternions for rotations, instead of inverting the matrix, with its more than 100 multiplications and sums, you can simply use the three lines above. As I said before, the point is not only the reduced processing, it is the simplicity. Quaternions are stupidly simple!</p>
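<p>A quick sanity check of that claim: composing a unit quaternion with its conjugate should give back the identity quaternion (0, 0, 0, 1). A sketch, reusing the multiplication formula from earlier (the helper names are my own):</p>

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z, w; } vec4;

/* Same multiplication formula as before: result = q1 x q2. */
vec4 quaternionMultiply(vec4 q1, vec4 q2)
{
    vec4 n;
    n.w = q1.w * q2.w - q1.x * q2.x - q1.y * q2.y - q1.z * q2.z;
    n.x = q1.w * q2.x + q1.x * q2.w + q1.y * q2.z - q1.z * q2.y;
    n.y = q1.w * q2.y - q1.x * q2.z + q1.y * q2.w + q1.z * q2.x;
    n.z = q1.w * q2.z + q1.x * q2.y - q1.y * q2.x + q1.z * q2.w;
    return n;
}

/* The three-line inverse: negating the vector part gives the conjugate,
   which for a unit quaternion is also the inverse. */
vec4 quaternionConjugate(vec4 q)
{
    q.x *= -1.0f;
    q.y *= -1.0f;
    q.z *= -1.0f;
    return q;
}
```

<p>Multiplying any unit rotation by its conjugate lands back on the identity, which is exactly what multiplying a rotation matrix by its inverse does.</p>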

<p>Is everything OK so far? Remember to ask if you have doubts. Let's now move on to the two formulas that put rotation angles into a quaternion, the ones at the center of the "Quaternions WAR". We have two ways: using the quaternion's native concept, informing a direction vector and an angle to rotate around it, or using the Euler angles (X, Y and Z), informing the three angles directly. The latter takes more multiplications, but it is more user friendly, because it works just like setting the angles in the matrix rotation formulas.</p>

<p>First setting the quaternion by a direction vector and an angle:</p>

<table width="675">  
<tr>  
<th>Axis to Quaternion</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
.
    // The new quaternion variable.
    vec4 q;

    // Converts the angle in degrees to radians.
    float radians = degreesToRadians(degrees);

    // Finds the sine and cosine of the half angle.
    float sin = sinf(radians * 0.5);
    float cos = cosf(radians * 0.5);

    // Formula to construct a new Quaternion based on direction and angle.
    q.w = cos;
    q.x = vec.x * sin;
    q.y = vec.y * sin;
    q.z = vec.z * sin;
.
</pre>

<p>To produce consecutive rotations you can multiply quaternions. Just as with matrices, to produce a local or a global rotation you just change the order of the multiplication (q1 x q2 or q2 x q1). And remember, just as with matrices, q1 x q2 means: "do q2 first and then q1".</p>

<p>Now, here is the formula to convert Euler Angles to Quaternions:</p>

<table width="675">  
<tr>  
<th>Euler Angles to Quaternion</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
.
    // The new quaternion variable.
    vec4 q;

    // Converts all degrees angles to radians.
    float radiansY = degreesToRadians(degreesY);
    float radiansZ = degreesToRadians(degreesZ);
    float radiansX = degreesToRadians(degreesX);

    // Finds the sine and cosine of each half angle.
    float sY = sinf(radiansY * 0.5);
    float cY = cosf(radiansY * 0.5);
    float sZ = sinf(radiansZ * 0.5);
    float cZ = cosf(radiansZ * 0.5);
    float sX = sinf(radiansX * 0.5);
    float cX = cosf(radiansX * 0.5);

    // Formula to construct a new Quaternion based on Euler Angles.
    q.w = cY * cZ * cX - sY * sZ * sX;
    q.x = sY * sZ * cX + cY * cZ * sX;
    q.y = sY * cZ * cX + cY * sZ * sX;
    q.z = cY * sZ * cX - sY * cZ * sX;
.
</pre>

<p>As you saw, I organized the code to process the angles in the order Y, Z and X. Why? Because this is the order in which the rotation will be produced by the quaternion. Using this formula, can we change that order? NO, we can't. This formula produces this specific kind of rotation (Y,Z,X). By the way, this is what we call the "Euler Rotation Order". If you want to know more about rotation order, or what that means, watch this video, it's really great: <a href='http://www.youtube.com/watch?v=zc8b2Jo7mno' >http://www.youtube.com/watch?v=zc8b2Jo7mno</a></p>

<p>Great! These are the basics of quaternions. Obviously there are also formulas to retrieve values from a quaternion: extract the Euler angles, extract the direction vector, and so on. That kind of thing is good just for checking what is happening inside the quaternion. My advice here is the same as for matrices: "Always store a user-friendly variable to control your rotations".</p>

<p>Now let's get back to the matrices and finally understand how we can create camera lenses.</p>

<p><br/><a name="camera_code"></a> <br />
<h2><strong>The code behind the 3D cameras</strong></h2><a href="#list_contents">top</a> <br />
Wow! Finally we are ready to understand how to create a camera lens. Now it is easy to figure out what we need to do: create a matrix which affects the vertex positions according to their depth. Using the concepts we saw at the beginning (Depth of Field, Near, Far, Angle of View, etc.) we can calculate a matrix that makes elegant and smooth transformations to simulate real lenses, which are themselves a simulation of the human eye.</p>

<p>As explained earlier, we can create two kinds of projections: Perspective and Orthographic. I'll not explain the mathematical formulas in depth here; if you are interested in the concepts behind the projection matrices, you can find a really good explanation here: <a href='http://www.songho.ca/opengl/gl_projectionmatrix.html' >http://www.songho.ca/opengl/gl_projectionmatrix.html</a>. All right, so now let's focus on the code, starting with the most basic kind, the Orthographic projection:</p>

<table width="675">  
<tr>  
<th>Orthographic Projection</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
.
    // These parameters are lens properties.
    // The "near" and "far" create the Depth of Field.
    // The "left", "right", "bottom" and "top" represent the rectangle formed
    // by the near area, this rectangle will also be the size of the visible area.
    float near = 0.001, far = 100.0;
    float left = 0.0, right = 320.0, bottom = 480.0, top = 0.0;

    // First Column
    matrix[0] = 2.0 / (right - left);
    matrix[1] = 0.0;
    matrix[2] = 0.0;
    matrix[3] = 0.0;

    // Second Column
    matrix[4] = 0.0;
    matrix[5] = 2.0 / (top - bottom);
    matrix[6] = 0.0;
    matrix[7] = 0.0;

    // Third Column
    matrix[8] = 0.0;
    matrix[9] = 0.0;
    matrix[10] = -2.0 / (far - near);
    matrix[11] = 0.0;

    // Fourth Column
    matrix[12] = -(right + left) / (right - left);
    matrix[13] = -(top + bottom) / (top - bottom);
    matrix[14] = -(far + near) / (far - near);
    matrix[15] = 1;
.
</pre>

<p>As you noticed, the Orthographic projection doesn't have any "Angle of View", because it doesn't need one. As you remember, the orthographic projection makes everything look equal: the units are always squared. In other words, the orthographic projection is a linear projection.</p>
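<p>To see that linearity in action, here is a sketch (the helper names are mine; "near" and "far" are renamed to avoid clashes with Windows macros) that builds this orthographic matrix for a 320x480 area and pushes a point through it. The center of the area lands exactly at the origin of normalized device coordinates, and a corner lands at ±1:</p>

```c
#include <assert.h>
#include <math.h>

/* Builds the orthographic matrix above. OpenGL matrices are column-major,
   so matrix[0..3] is the first column, matrix[4..7] the second, and so on. */
void orthographicMatrix(float *matrix, float left, float right,
                        float bottom, float top, float nearZ, float farZ)
{
    matrix[0]  = 2.0f / (right - left);
    matrix[1]  = 0.0f; matrix[2] = 0.0f; matrix[3] = 0.0f;

    matrix[4]  = 0.0f;
    matrix[5]  = 2.0f / (top - bottom);
    matrix[6]  = 0.0f; matrix[7] = 0.0f;

    matrix[8]  = 0.0f; matrix[9] = 0.0f;
    matrix[10] = -2.0f / (farZ - nearZ);
    matrix[11] = 0.0f;

    matrix[12] = -(right + left) / (right - left);
    matrix[13] = -(top + bottom) / (top - bottom);
    matrix[14] = -(farZ + nearZ) / (farZ - nearZ);
    matrix[15] = 1.0f;
}

/* Transforms a point (x, y, z, w = 1) by a column-major 4x4 matrix,
   writing out the resulting x and y. */
void transformXY(const float *m, float x, float y, float z,
                 float *outX, float *outY)
{
    *outX = m[0] * x + m[4] * y + m[8] * z + m[12];
    *outY = m[1] * x + m[5] * y + m[9] * z + m[13];
}
```

<p>With left = 0, right = 320, bottom = 480 and top = 0 (as in the listing above), the screen point (160, 240) maps to (0, 0) and the corner (0, 480) maps to (-1, -1): every unit is scaled by the same constant factor, with no dependence on depth.</p>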

<p>The code above shows us what we imagined before: the projection matrix slightly affects the rotations (X, Y and Z), directly affects the scales (the main diagonal) and acts most incisively on the vertex positions.</p>

<p>Now, let's see the perspective projection, a slightly more elaborate case:</p>

<table width="675">  
<tr>  
<th>Perspective Projection</th>  
</tr>  
</table>  

<pre class="brush:cpp">  
.
    // These parameters are lens properties.
    // The "near" and "far" create the Depth of Field.
    // The "angleOfView", as the name suggests, is the angle of view.
    // The "aspectRatio" is the cool thing about this matrix. OpenGL doesn't
    // have any information about the screen you are rendering to, so the
    // results could look stretched. This variable puts things onto the
    // right path. The aspect ratio is your device screen (or desired area)
    // width divided by its height. This gives a number < 1.0 if the area has
    // more vertical space and a number > 1.0 if it has more horizontal space.
    // An aspect ratio of 1.0 represents a square area.
    float near = 0.001, far = 100.0;
    float angleOfView = 45.0;
    float aspectRatio = 0.75;

    // Some calculus before the formula.
    float size = near * tanf(degreesToRadians(angleOfView) / 2.0); 
    float left = -size, right = size, bottom = -size / aspectRatio, top = size / aspectRatio;

    // First Column
    matrix[0] = 2 * near / (right - left);
    matrix[1] = 0.0;
    matrix[2] = 0.0;
    matrix[3] = 0.0;

    // Second Column
    matrix[4] = 0.0;
    matrix[5] = 2 * near / (top - bottom);
    matrix[6] = 0.0;
    matrix[7] = 0.0;

    // Third Column
    matrix[8] = (right + left) / (right - left);
    matrix[9] = (top + bottom) / (top - bottom);
    matrix[10] = -(far + near) / (far - near);
    matrix[11] = -1;

    // Fourth Column
    matrix[12] = 0.0;
    matrix[13] = 0.0;
    matrix[14] = -(2 * far * near) / (far - near);
    matrix[15] = 0.0;
.
</pre>

<p>Understanding: wow, the formula changes only slightly, but now there is no direct offset on the X and Y positions; only the Z position (depth) gets one. It continues affecting the rotations X, Y and Z, but there is a big change in the Third Column. What is that? That is exactly the calculation that produces the perspective and the factors that adjust the aspect ratio. Note that the last element of the third column is -1: it copies the (negated) depth of each vertex into the W component, which later produces the perspective division and compensates the stretches of the aspect ratio when the final matrix is applied.</p>

<p>Now let's talk about the final matrix. This is a very important step. Unlike the other matrix multiplications, this time you can't change the order, otherwise you'll get unexpected results. This is what you need to do:</p>

<p>Take the camera View Matrix (the inverted matrix containing the rotations and translations of the camera) and multiply the Projection Matrix by it: <br />
<strong>PROJECTION MATRIX x VIEW MATRIX</strong>. <br />
Remember, this produces the effect: "do the VIEW MATRIX first and then the PROJECTION MATRIX".</p>

<p>Now you have what we usually call the VIEW_PROJECTION MATRIX. Using this new matrix, multiply it by the MODEL MATRIX (the matrix containing all the rotations, scales and translations of an object): <br />
<strong>VIEW_PROJECTION MATRIX x MODEL MATRIX</strong>. <br />
Again, just to reinforce, that means: "do the MODEL MATRIX first and then the VIEW_PROJECTION MATRIX". Finally! Now you have what is called the MODEL_VIEW_PROJECTION MATRIX!</p>
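<p>The whole chain can be sketched with one generic column-major multiply (the function name is my own, not from any particular library). Remember the convention: <strong>matrixMultiply(a, b, out)</strong> computes a x b, that is, "do b first, then a":</p>

```c
#include <assert.h>

/* result = a x b for column-major 4x4 matrices (the OpenGL layout),
   which means: apply b first, then a. "result" must not alias a or b. */
void matrixMultiply(const float *a, const float *b, float *result)
{
    for (int col = 0; col < 4; col++)
    {
        for (int row = 0; row < 4; row++)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += a[k * 4 + row] * b[col * 4 + k];
            result[col * 4 + row] = sum;
        }
    }
}
```

<p>With it, the final chain is exactly two calls: <strong>matrixMultiply(projection, view, viewProjection)</strong> and then <strong>matrixMultiply(viewProjection, model, modelViewProjection)</strong>. Swapping the argument order swaps which transformation happens first, which is why the order above can't be changed.</p>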

<p>CONGRATULATIONS! <br />
Gasp! Oof!</p>

<p>OK, I know what you are thinking...  "WTF! All this just to produce one simple stupid matrix!" Yeah, I thought the same. Isn't there a simpler or faster way to do that?</p>

<p>Well, I think the answer to that question is in the conclusion of this article. Let's go!</p>

<p><br/><a name="conclusion"></a> <br />
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a> <br />
From here on, things become much more complex than this. Matrices and quaternions are just the beginning of the journey to construct a 3D engine or personal framework. So if you haven't made a decision yet, maybe it's time to make one. I think you have two choices:</p>

<ol>  
    <li>You could construct a Framework/Engine by yourself.</li>
    <li>You could take an existing Framework/Engine and learn how to use it.</li>
</ol>

<p>Just like any choice, both have positive and negative points. You need to think and decide what is better for your purposes. A third option, building a little "template" to reuse in your projects, doesn't seem like a good one to me, so personally I discard it. There are so many things to be done in the 3D world that we would probably go crazy, or get very lost, trying to fit everything into one or a few templates. So my last advice is: "Make a choice".</p>

<p>Anyway, the next tutorial I make will be more advanced. For now, let's review everything in this tutorial:</p>

<ul>  
    <li>Cameras can have a Convex or Concave lens. Cameras also have properties like Depth of Field, Angle of View, Near and Far.</li>
    <li>In the 3D world we can work with a realistic projection called Perspective and an unrealistic projection called Orthographic.</li>
    <li>The camera should work as the inverse of a normal 3D object.</li>
    <li>We never change the original structure of a 3D object; we just take the result of a temporary change.</li>
    <li>Those changes can be made using matrices. We have formulas to rotate, scale and translate a matrix. We also have quaternions to work with the rotations.</li>
    <li>We also use a matrix to create the camera's lens, using a Perspective or Orthographic projection.</li>
</ul>

<p>This is all for now. <br />
Thanks for reading, see you in the next tutorial!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/cameras-on-opengl-es-2-x/</link><guid isPermaLink="false">d337e710-ad5b-44cc-9c1f-b0b4b3564a7a</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:48:06 GMT</pubDate></item><item><title><![CDATA[Khronos EGL and Apple EAGL]]></title><description><![CDATA[<p><img src='http://db-in.com/images/egl_eagl_apis.jpg'  alt="" title="Binary world" width="200" height="200" class="alignleft size-full" />Hi again everybody!</p>

<p>This is a little article to support my full tutorial about OpenGL. In this article I'll talk about the EGL API and EAGL (the EGL API as implemented by Apple). We'll see how to set up an OpenGL application to communicate with the device's windowing system.</p>

I'll focus on Objective-C and iOS, but I'll talk about the setup on other devices and in other languages too. So let's start!  
<!--more-->  

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>As I said in the first part of my full tutorial about OpenGL (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  target="blank">click here to see</a>), OpenGL is not responsible for managing the windowing system of each device that supports it; OpenGL relinquished that responsibility. So, to make the bridge between OpenGL's render output and the device's screen, the Khronos group created the EGL API.</p>

<p>Remember that EGL can be modified by the vendors to fit exactly what their devices and windowing systems need. So my big advice in this article is: ALWAYS CONSULT THE EGL INSTRUCTIONS FROM YOUR VENDOR.</p>

<p><br/>  </p>

<h2><strong>Setup EGL API</strong></h2>  

<p>OK, now we'll enter EGL's common ground: independently of the vendor's implementation, EGL's logic is always the same.</p>

<p>The first thing EGL needs to know is where we want to display our content. Normally this is done with the function <strong>eglGetDisplay</strong>, which returns an EGLDisplay data type. A constant EGL_DEFAULT_DISPLAY is always implemented by the vendors to return their default display. Right after this you call <strong>eglInitialize</strong> to initialize the EGL API; this function returns a boolean to inform the status. So normally you start with this code:</p>

<pre class="brush:csharp">  
EGLDisplay display;

display = eglGetDisplay(EGL_DEFAULT_DISPLAY);

if (eglInitialize(display, NULL, NULL))  
{
    // Proceed with your code.
}
</pre>

<p>The NULL, NULL parameters are pointers through which you can get the major and minor version of the current EGL implementation. I don't want to go deep here; the important thing is to understand this step: the EGL API needs to know which display to initialize, that's it!</p>

<p>OK, the next step is the configuration. Once EGL knows the display, it can prepare the bridge between OpenGL's output and the device's screen. To start constructing this bridge, EGL needs some configuration, which involves the color format, the individual color sizes, the transparency, the samples per pixel, the pixel format and many others. By default EGL provides several functions for this step, like <strong>eglGetConfigs</strong> or <strong>eglChooseConfig</strong>. Here the interference of the vendors is more intense, because they need to provide a bunch of configurations for their systems and devices. I'll not describe it here; there are too many systems and languages. So again, always consult your vendor.</p>

<p>The last step is simpler. After all the configuration, you instruct the EGL API to create a render surface, that is, the surface onto which your OpenGL render output will be placed. You can choose between an on-screen or an off-screen surface. The first means your render output goes directly to the device's screen; off-screen means your render output goes to a buffer, an image file, a snapshot, or anything like that. You can create the surface by using <strong>eglCreateWindowSurface</strong> or <strong>eglCreatePbufferSurface</strong>; both return an EGLSurface.</p>

<p>But here is a little advice: if you want to place your render output onto an off-screen surface, it's better to use a frame buffer directly with the OpenGL API instead of creating an EGL PBuffer. It's faster and costs less for your application.</p>

<p>Are we ready now?</p>

<p>Not yet. This is just the platform's side of the bridge, EGL has the other side of the bridge, the OpenGL's side.</p>

<p><br/>  </p>

<h2><strong>EGL Context</strong></h2>  

<p>Until now, EGL knows all it needs about the windowing system, but now EGL needs to know all about our OpenGL usage. Internally, EGL works with 2 frame buffers. You remember what a frame buffer is from the first part of the OpenGL tutorial, right? (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1/#Buffers'>click here if not</a>). The EGL API uses these 2 frame buffers to place the OpenGL render output onto your desired surface.</p>

<p>To do this properly, EGL needs to know where the OpenGL buffers you want to render to are. This step is very simple. All we need is to create an EGL context and make it the current context (yes, we can create many contexts). Once we make a context the current one, the subsequent Frame Buffer we use in OpenGL will be used by the EGL context. Only one Frame Buffer can be used at a time (the next part of the OpenGL tutorial talks in depth about this). The usual EGL context functions are:</p>

<table width="675">  
<tr>  
<th>EGL Context</th>  
</tr>  
<tr>  
<td><h5><strong>EGLContext eglCreateContext(EGLDisplay display, EGLConfig config, EGLContext shareContext, const EGLint* attribList)</strong></h5><br/>  
<ul>  
    <li><strong>display</strong>: The display obtained previously.</li>
    <li><strong>config</strong>: The previously set configuration.</li>
    <li><strong>shareContext</strong>: Usually EGL_NO_CONTEXT, to share no context.</li>
    <li><strong>attribList</strong>: For OpenGL ES, this parameter determines which version of OpenGL ES will be used: a value of 1 represents an OpenGL ES 1.x context and a value of 2 an OpenGL ES 2.x context.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>EGLBoolean eglMakeCurrent(EGLDisplay display, EGLSurface draw, EGLSurface read, EGLContext context)</strong></h5><br/>  
<ul>  
    <li><strong>display</strong>: The display obtained previously.</li>
    <li><strong>draw</strong>: The surface obtained previously.</li>
    <li><strong>read</strong>: The same value as draw.</li>
    <li><strong>context</strong>: The context to be the current.</li>
</ul>  
</td>  
</tr>  
</table>

<p>Does it seem a little confusing? Don't worry, I'm only talking superficially about this. The point here is for you to understand that an EGL context represents the OpenGL side of the bridge. The most important advice I can give you is: look for your vendor's documentation of the EGL API.</p>

<p><br/><a name="rendering"></a>  </p>

<h2><strong>Rendering with EGL Context</strong></h2>  

<p>Once EGL knows about the windowing system and has a context to know about our OpenGL application, we can make our render output be presented onto the desired surface.</p>

<p>The render step is very, very simple. Do you remember I said EGL works with 2 internal frame buffers? Now it's time to use them. All we need is to instruct EGL to swap the buffers. Swap? Why?</p>

<p>While EGL presents one frame buffer on your desired surface, the other one stays in the back waiting for a new render output. The back buffer is filled with all the renders you do until the next call to swap the buffers. When you swap, EGL brings the back buffer to the front and presents the final render output onto the desired surface, while the old front buffer goes to the back to restart the process.</p>

<p>This technique is a great boost to our application's performance, because the final surface is not notified every time we execute a render command; it is notified only when we finish all the render commands. Another improvement comes from the back buffer receiving the render outputs faster than, for example, a device's screen could.</p>

<p><img src='http://db-in.com/images/swap_buffers_example.jpg'  alt="EGL API swap the buffers to present the render onto the desired surface." title="swap_buffers_example" width="600" height="600" class="size-full wp-image-944" /></p>

<p>This is the function to swap the buffers and present the render output onto the desired surface.</p>

<table width="675">  
<tr>  
<th>Swap the EGL's buffers</th>  
</tr>  
<tr>  
<td><h5><strong>EGLBoolean eglSwapBuffers(EGLDisplay display, EGLSurface surface)</strong></h5><br/>  
<ul>  
    <li><strong>display</strong>: The display obtained previously.</li>
    <li><strong>surface</strong>: The surface obtained previously.</li>
</ul>  
</td>  
</tr>  
</table>

<p>OK, this is all about EGL making the bridge between the OpenGL core and the windowing systems. I know it seems like many steps just to output a simple render. But seen from afar, it's just a little setup:  </p>

<ol>  
    <li>Initialize the EGL API with the display given by your vendor.</li>
    <li>Set the configurations and create your desired surface.</li>
    <li>Create a context and make it your current context.</li>
    <li>Ask your EGL context to swap its internal buffers.</li>
</ol>
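<p>The four steps above map to only a handful of EGL calls. The sketch below is just a shape, not a recipe: it can't run on its own (it needs a vendor's EGL implementation and a real native window handle), the error handling is omitted, and the config attributes shown are only one plausible choice. Always check your vendor's documentation for the real values.</p>

```c
#include <EGL/egl.h>

// A hedged sketch of the full EGL setup flow; "nativeWindow" comes from
// the platform's windowing system and is vendor-specific.
void setupAndRender(EGLNativeWindowType nativeWindow)
{
    // Step 1: get and initialize the default display.
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, NULL, NULL);

    // Step 2: choose a configuration (illustrative attributes) and
    // create an on-screen window surface from it.
    EGLConfig config;
    EGLint numConfigs;
    EGLint configAttribs[] = { EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 6,
                               EGL_BLUE_SIZE, 5,
                               EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
                               EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                               EGL_NONE };
    eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);
    EGLSurface surface = eglCreateWindowSurface(display, config,
                                                nativeWindow, NULL);

    // Step 3: create an OpenGL ES 2.x context and make it current.
    EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context = eglCreateContext(display, config,
                                          EGL_NO_CONTEXT, contextAttribs);
    eglMakeCurrent(display, surface, surface, context);

    // ... issue your OpenGL ES render commands here ...

    // Step 4: swap the internal buffers to present the render output.
    eglSwapBuffers(display, surface);
}
```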

<p>Now I'll talk about the EGL implementation by Apple, which has a lot of changes from the original. Many changes!</p>

<p><br/>  </p>

<h2><strong>EAGL - The Apple's EGL API</strong></h2>  

<p><img src='http://db-in.com/images/egl_eagl_apis.jpg'  alt="" title="egl_eagl_apis" width="300" height="196" class="alignright size-medium wp-image-978" />EAGL (which we usually pronounce "Eagle") is Apple's implementation of the EGL API. The most important thing to know about EAGL is that Apple doesn't allow us to render directly to the screen: we must work with frame and render buffers.</p>

<p>But why did Apple change EGL so much? Well, you know Apple's approach: the Cocoa Framework is almost like another language, it has infinite rules, hard rules! If you change the place of one little screw, the whole framework falls down! So Apple made those changes to fit the EGL API into Cocoa's rules.</p>

<p>Using EAGL, we must create a color render buffer and draw our render output into it. Plus, we must bind that color render buffer to a special Apple layer called CAEAGLLayer. And finally, to make a render, we need to ask the context to present that color render buffer. Internally, this command swaps the buffers just like the original EGL context, but our code changes a lot. Let's see.</p>

<p><br/>  </p>

<h2>Setup EAGL API</h2>

<p>The first thing we need to do is create a subclass of UIView. Inside this new class, we need to change its Core Animation layer. Doing this is the equivalent, in the original EGL API, of initializing the API, taking the display from the windowing system and defining our surface. So in this equivalence we'll have <strong>EGLDisplay = the UIWindow which holds our custom view</strong> and <strong>EGLSurface = our subclass of UIView</strong>.</p>

<p>Our code will be like this:</p>

<pre class="brush:csharp">  
#import &lt;UIKit/UIKit.h&gt;

@interface CustomView : UIView

@end

@implementation CustomView

// Overrides the layerClass from UIView.
+ (Class) layerClass
{
    // Instead of returning a normal CALayer, we return the special CAEAGLLayer.
    // Inside the CAEAGLLayer, Apple did the most important part
    // of their custom EGL implementation.
    return [CAEAGLLayer class];
}

@end
</pre>

<p>Now we need to perform the step of configuring the EGL API, defining the color format, the sizes and so on. In EAGL we do this step by setting the properties of our CAEAGLLayer. So let's code:</p>

<pre class="brush:csharp">  
@implementation CustomView

...

- (id) initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        // Assign a variable to be our CAEAGLLayer temporary.
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)[self layer];

        // Construct a dictionary with our configurations.
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking, 
                             kEAGLColorFormatRGB565, kEAGLDrawablePropertyColorFormat, 
                             nil];

        // Set the properties on the CAEAGLLayer.
        [eaglLayer setOpaque:YES];
        [eaglLayer setDrawableProperties:dict];

        // Other initializations...
    }

    return self;
}

...

@end
</pre>

<p><br/>  </p>

<h2>EAGL Context</h2>

<p>Following the EGL API steps, the next one is about the context. Here again Apple changed almost everything to fit the EGL API into their Cocoa Framework. The EAGL API gives us only two Objective-C classes; one of them is EAGLContext. The EAGLContext is equivalent to EGLContext.</p>

<p>Here I have to admit that, even changing everything, Apple made this step easier than the EGL API. All you need is to allocate a new instance of EAGLContext, initializing it with your desired OpenGL ES version (1.x or 2.x), and call a method to make that context the current one. Here you don't have to inform the EGLDisplay, the EGLSurface, the EGLContext and the other annoying parameters again, as in the original EGL API.</p>

<pre class="brush:csharp">  
@interface CustomView : UIView
{
    EAGLContext *_context;
}

@end

@implementation CustomView

...

- (id) initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        // Other initializations...

        // Create an EAGLContext using OpenGL ES 2.x.
        _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

        // Make our new context, the current context.
        [EAGLContext setCurrentContext:_context];
    }

    return self;
}

...

@end
</pre>

<p><br/><a name="renderingeagl"></a>  </p>

<h2>Rendering with EAGL Context</h2>

<p>Next step! <br />
Following the EGL steps above, the next one would be swapping the buffers (rendering). But here Apple changed things a lot. As I said before, to use EAGL we must create at least a color render buffer, which necessarily needs a frame buffer too. So in reality the next step should be the buffer creation. I could describe that step here, but as this article is just a middle point between the first and the second part of my full tutorial, describing it is not my intention; I'll leave that discussion to the next part of the tutorial.</p>

<p>So let's go to the final EAGL step, the rendering. <br />
I told you Apple uses a color render buffer instead of directly swapping the context's buffers. By doing this, Apple simplified the last step, the swap of the buffers. Instead of swapping the buffers informing the desired display and surface, we just ask our EAGLContext to present its render buffer.</p>

<pre class="brush:csharp">  
@interface CustomView : UIView
{
    EAGLContext *_context;
}

- (void) makeRender;

@end

@implementation CustomView

...

- (void) makeRender
{
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}

...

@end
</pre>

<p>The GL_RENDERBUFFER parameter to <strong>presentRenderbuffer</strong> is a constant for internal usage. This call should occur just after we perform all our changes to our 3D objects, like translations or rotations.</p>

<p>Well, these are all the basic instructions about the EGL and EAGL APIs. Obviously, using EAGL you could take a more OOP approach by separating your custom UIView from another class which holds the EAGLContext instance and makes the final presentation.</p>

<p>Now, as usual, let's review the steps using EAGL:  </p>

<ol>  
    <li>Make a subclass of UIView and override the layerClass method to return the CAEAGLLayer class;</li>
    <li>Set up the properties to the CAEAGLLayer;</li>
    <li>Create an instance from EAGLContext;</li>
    <li>Present a color render buffer onto the screen by using the EAGLContext.</li>
</ol>

<p><br/>  </p>

<h2><strong>Conclusion</strong></h2>  

<p>Well done, my friends. <br />
Think of this article as an abstract class! It's a little step, but it would also be too annoying to cover this inside the second part of my OpenGL tutorial.</p>

<p>If you were to save only a single sentence from this entire article, let it be: "Consult the EGL instructions from your vendor!" The vendors don't interfere inside OpenGL, so I can talk about it independently of the system or language. But the EGL API is the bridge between the unified OpenGL API and all the systems which support OpenGL, so the vendors' interference in EGL is bigger. Just like Apple did!</p>

<p>Thanks for reading!</p>

<p>See you in the next part of OpenGL's tutorial!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/khronos-egl-and-apple-eagl/</link><guid isPermaLink="false">cd41545b-160f-47db-a34b-f21fa58deaeb</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:47:05 GMT</pubDate></item><item><title><![CDATA[All about OpenGL ES 2.x - (part 3&#x2F;3)]]></title><description><![CDATA[<p><img src='http://db-in.com/images/opengl_part3.png'  alt="" title="opengl_part3" width="300" height="352" class="alignright size-full" /> <br />
Here we are for the final part of this tutorial series! <br />
Welcome back, my friends.</p>

<p>It's time to dive into advanced knowledge about OpenGL and the 3D world. In this tutorial we'll see many things about 2D graphics, multisampling, textures and rendering to off-screen surfaces, and we'll try to optimize our applications' performance to the maximum.</p>

<p>It's very important that you already know all the concepts covered in the other two parts of this series. If you missed something, here is the list:</p>

<p>This series is composed of 3 parts:  </p>

<ul>  
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  target="_blank">Part 1 - Basic concepts of 3D world and OpenGL (Beginners)</a></li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2'  target="_blank">Part 2 - OpenGL ES 2.0 in-depth (Intermediate)</a></li>
    <li>Part 3 - Jedi skills in OpenGL ES 2.0 and 2D graphics (Advanced)</li>
</ul>  

<!--more-->

<p>At this point I imagine you know a lot about OpenGL and the 3D world. You've probably already created some applications using OpenGL, discovered many cool things, found some problems, maybe your own engine/framework is even under construction, and I'm very glad to see you coming back.</p>

<p>As I once read in a book: "One day you didn't know how to walk. Then you learned to stand up and walk. Now it's time to run, jump and swim!"... and why not "to fly"? With OpenGL our imagination has no limits; we can fly.</p>

<p>Let's start.</p>

<p><a name="list_contents"></a> <br />
Here is a little list of contents to guide your reading:  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#2d_graphics">2D graphics with OpenGL</a></li>
    <li><a href="#grid">The Grid Concept</a></li>
    <li><a href="#depth_buffer_2d">The Depth Render Buffer in 2D</a></li>
    <li><a href="#cameras_2d">The Cameras with 2D</a></li>
    <li><a href="#multisampling">The Multisampling</a></li>
    <li><a href="#more_texture">More About Textures</a>
        <ul>
            <li><a href="#bpp">Bytes per Pixel</a></li>
            <li><a href="#pvrtc"> PVRTC</a></li>
        </ul></li>
    <li><a href="#tips_tricks">Tips and Tricks</a>
        <ul>
            <li><a href="#cache">The Cache</a></li>
            <li><a href="#store_values"> Store the Values</a></li>
            <li><a href="#c_fastest"> C is Always the Fastest Language</a></li>
        </ul></li>
    <li><a href="#conclusion"> Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/><h2><strong>At a glance</strong></h2> <br />
Remembering everything up to this point:  </p>

<ol>  
    <li>OpenGL’s logic is composed of just 3 simple concepts: Primitives, Buffers and Rasterization.</li>
    <li>OpenGL ES 2.x works with a programmable pipeline, which is synonymous with shaders.</li>
    <li>OpenGL isn't aware of the output device, platform or output surface. To bridge OpenGL's core and our devices, we must use EGL (or EAGL on iOS).</li>
    <li>Textures are crucial and must use a specific pixel format and order to fit within OpenGL.</li>
    <li>We start the render process by calling <strong>glDraw*</strong>. The vertices first pass through the Vertex Shader, and several checks then decide whether the processed results reach the Fragment Shader.</li>
    <li>The original structure of our meshes should never change. We just create transformation matrices to produce the desired results.</li>
</ol>

<p>First I'll talk about 2D graphics, then we'll see what the multisampling/anti-aliasing filter is. Personally, I don't like the cost-benefit ratio of this kind of technique: many times an application could run nicely without multisampling, but a simple multisampling filter can completely destroy its performance. Still, sometimes it's really necessary to enable multisampling temporarily to produce smooth images.</p>

<p>Later I'll talk about textures in depth and their optimized 2-bytes-per-pixel data formats. We'll also see PVRTC and how to load it into an OpenGL texture, besides rendering to an off-screen surface.</p>

<p>And finally I'll talk briefly about some performance gains I discovered by myself, some tips and tricks which really help me a lot today and which I want to share with you.</p>

<p>Let's go!</p>

<p><br/><a name="2d_graphics"></a>  </p>

<h2><strong>2D graphics with OpenGL</strong></h2><a href="#list_contents">top</a>  
Using 2D graphics with OpenGL is not necessarily limited to the line or point primitives. All three primitives (triangles, lines and points) are good for both 3D and 2D. The first thing about 2D graphics is the Z depth. All our work becomes two-dimensional, excluding the Z axis from translations and scales and excluding X and Y from rotations. This implies that we don't need the Depth Render Buffer anymore, because everything we draw will be made at the same Z position (usually 0.0).

A question comes up: "So how will OpenGL know which object should be drawn in front of (or on top of) the others?" It's very simple: by drawing the objects in the order we want (objects in the background should be drawn first). OpenGL also offers a feature called Polygon Offset, but it's more like an adjustment than real ordering.
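Since there is no depth buffer involved, the back-to-front ordering is entirely up to us. Here is a minimal sketch of this idea in C; the <code>Sprite</code> struct and the function names are illustrative, not part of any OpenGL API:

```c
#include <stdlib.h>

// Each drawable carries an explicit layer: lower layers are drawn first,
// so background objects end up behind foreground ones.
typedef struct {
    int layer;        // 0 = background, higher = closer to the viewer
    const char *name; // illustrative label only
} Sprite;

static int compare_layers(const void *a, const void *b)
{
    return ((const Sprite *)a)->layer - ((const Sprite *)b)->layer;
}

// Sort back-to-front; after this, you would issue the glDraw* calls
// in array order.
void sort_draw_order(Sprite *sprites, size_t count)
{
    qsort(sprites, count, sizeof(Sprite), compare_layers);
}
```

After sorting, a single pass over the array issues the draw calls in the correct visual order.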

OK, now we can think of 2D graphics in three ways:  

<ol>  
    <li>Many squares on the same Z position.</li>
    <li>Many points on the same Z position.</li>
    <li>A mix of both of the above.</li>
</ol>  

<img class="size-full wp-image-1372" title="2d_orthographic_example" src='http://db-in.com/images/2d_orthographic_example.jpg'  alt="This is how a 2D graphics looks like for OpenGL using squares." width="600" height="497" />

<img class="size-full wp-image-1373" title="2d_scene_example" src='http://db-in.com/images/2d_scene_example.jpg'  alt="How a 2D scene will appear on the screen." width="600" height="497" />

You can imagine how easy it is for OpenGL, a state machine prepared to work with millions of triangles, to deal with those few triangles. Even in extreme situations, 2D graphics works with only hundreds of triangles.

In simple words, everything will be textures, so most of our work with 2D will be on the textures. Many people feel compelled to create an API to work with Non-POT (non-power-of-two) textures, that is, textures with dimensions like 36 x 18, 51 x 39, etc. My advice here is: "Don't do that!". It's not a good idea to work with 2D graphics using Non-POT textures. As you've seen in the image above, it's always a good idea to work with an imaginary grid, which should be POT; a good choice could be 16x16 or 32x32.

If you are planning to use PVRTC compressed image files, it could be good to use an 8x8 grid, because PVRTC's minimum size is 8. I don't advise making grids smaller than 8x8: they are unnecessarily precise, they increase your development work and they compromise your application's performance. 8x8 grids are already very precise. We'll soon see the differences between the grids and when and how to use them, but first let's talk a little bit more about the grid itself.

<br/><a name="grid"></a>  
<h2><strong>The Grid Concept</strong></h2><a href="#list_contents">top</a>  
I think this is the most important part in the planning of a 2D application. In a 3D game, for example, to determine where a character can walk we must create a collision detector. This detector could be a box (bounding box) or a mesh (bounding mesh, a simple copy of the original). In both cases, the calculations are important and expensive. But in a 2D application it's very, very easy to find the collision areas if you are using a grid, because each one is just a square area in X and Y coordinates!
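To see how cheap grid collision really is, here is a tiny sketch in C. The map data, cell size and function names are made up for illustration; the point is that the whole "collision detector" reduces to an array lookup:

```c
#include <stdbool.h>

#define GRID_W 4
#define GRID_H 4
#define CELL_SIZE 32 /* pixels per grid square (a 32x32 grid) */

/* 1 = blocked (a wall), 0 = walkable. Illustrative map data. */
static const int blocked[GRID_H][GRID_W] = {
    {1, 1, 1, 1},
    {1, 0, 0, 1},
    {1, 0, 0, 1},
    {1, 1, 1, 1},
};

/* The collision test: convert the pixel position to a grid cell
   and read the map. No bounding boxes, no mesh intersection. */
bool can_walk(int px, int py)
{
    int cx = px / CELL_SIZE;
    int cy = py / CELL_SIZE;
    if (cx < 0 || cx >= GRID_W || cy < 0 || cy >= GRID_H)
        return false;
    return blocked[cy][cx] == 0;
}
```

Compare this single array access with a 3D bounding-mesh intersection test and the performance argument for the grid becomes obvious.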

This is just one reason why the grid is so important. I know you can come up with many other advantages of the grid, like organization, precision in the calculations, precision of the objects on the screen, etc. About 10 years ago (or maybe more) I worked with a tiny program to produce RPG games with 2D graphics. The idea of a grid was very well established there. The following images show how everything can be fitted into the grid:

<img class="size-medium wp-image-1374" title="rpg_maker_example1" src='http://db-in.com/images/rpg_maker_example1.jpg'  alt="Grid in the RPG Maker. (click to enlarge)" width="300" height="227" />

I know, I know... it's the "Fuckwindows" system, sorry for that... as I told you, it was a decade ago... OK, let's understand the important features of the grid. You can click on the side image to enlarge it. The first thing I want you to notice is the overlap of the images. Notice that the left side of the window is reserved for a kind of library. At the top of this library you can see little square images (32x32 in this case). Those squares are reserved for the floor of the scenario (in our OpenGL language, it would be the background). The other images in that library are transparent images (PNG) which can be placed on top of the floor squares. You can see this difference by looking at the big trees placed on the grid.

Now find the "hero" on the grid. He's on the right side, near a tree, shown as a little redheaded face inside a box. This is the second important point about the grid. That hero doesn't occupy only one little square on the grid; he could be bigger, but to the grid, the action delimiter is only one square. Confused? Well, it's always a good idea to use only one grid square to deal with the actions, because it keeps your code much more organized than other approaches. To create larger areas of action you can duplicate the action square, just like the exit of the village in the top right area of the grid in this side image.

I'm sure you can imagine how easy it is to create a control class to deal with the actions in your 2D application and then create view classes referencing the control classes, so you can prepare the view classes to detect collisions on many squares of the grid. So you have 1 action - N grid square detectors. This way you can take all the advantages of the grid and also get an incredible boost in your application's performance.

By using the grid you can easily define the collision areas which the character cannot pass through, like the walls. Another great advantage of the grid is defining "top areas", that is, areas which will always be drawn on top, like the upper part of the trees. So if the character passes through these areas, he will be displayed behind them.

The following image shows a final scene which uses all these grid concepts. Notice how many images are overlapped by others, pay attention to how the character interacts with the action squares and the top areas, and notice the topmost effects overlapping everything, like the cloud shadows or the yellow light coming from the sun.

<img class="size-full wp-image-1375" title="rpg_maker_example2" src='http://db-in.com/images/rpg_maker_example2.jpg'  alt="RPG Maker final scene." width="600" height="450" />

Summarizing the points so far, the grid is really the most important part of planning a 2D application. The grid is not a real thing in OpenGL, so you have to be careful using this concept, because everything will be imaginary. And just to let you know, an extra piece of information: the grid concept is so important that OpenGL internally uses a grid concept to construct the Fragments.

Great, this is everything about the grid. Now you could say: "OK, but this is not what I wanted; I want a game with a projection like Diablo, Sim City or even We Rule!". All right, so let's make things more complex and bring the Depth Render Buffer and Cameras back into our 2D application.

<br/><a name="depth_buffer_2d"></a>  
<h2><strong>The Depth Render Buffer in 2D</strong></h2><a href="#list_contents">top</a>  
Knowing how 2D graphics works with OpenGL, we can think of a more refined approach, like using the Depth Buffer even in 2D applications.

<img class="size-medium wp-image-1376" title="grid_depth_example" src='http://db-in.com/images/grid_depth_example.jpg'  alt="&quot;2D game&quot; using OpenGL with Depth Render Buffer. (click to enlarge)" width="310" height="205" />

<img class="size-medium wp-image-1377" title="grid_normal_example" src='http://db-in.com/images/grid_normal_example.jpg'  alt="&quot;2D game&quot; using OpenGL without Depth Render Buffer. (click to enlarge)" width="310" height="205" />

By clicking on the images above, you can notice the difference between them. Both screenshots are from famous iOS games; both use OpenGL and both are known as 2D games. Although they use OpenGL ES 1.1, we can still see the concept of Grid + Depth Render Buffer. The game on the left (<em>Gun Bros</em>) makes use of a very small grid, exactly 8x8 pixels. This kind of grid gives the game incredible precision to place the objects, but to improve the user experience you need to group grid squares to deal with the actions; in this case a good choice could be arranging 4 or 8 grid squares for each action detector. The game on the right is called <em>Inotia</em>; by the way, Inotia is today in its 3rd edition. Since the first edition, Inotia has always used a big grid, 32x32 pixels. Like Gun Bros, Inotia uses OpenGL ES 1.1.

There are many differences between those two grid types (8x8 and 32x32). The first one (8x8) is much more precise and could seem to be the best choice, but remember that this choice will greatly increase your processing. The Inotia game has a light processing demand, something absolutely unimpressive for iOS hardware. You need to make the choice that best fits the application you are planning.

Now, talking about the Depth Render Buffer, the great thing about it is that you can use 3D models in your application. Without the Depth Render Buffer you must use only squares, or other primitive geometric forms, with textures. That way you must create a different texture for each frame of your animations, especially character animations; obviously a great idea is to make use of a texture atlas:

<img class="size-medium wp-image-1378" title="texture_atlas_example" src='http://db-in.com/images/texture_atlas_example.png'  alt="Character texture atlas from Ragnarok." width="169" height="300" />

The Inotia image above has a similar texture atlas for each character that appears in the game. Looking at that image you can see that the three characters on the screen can only face four directions. Now, take another look at the Gun Bros image above.

Notice that the characters can face all directions. Why? Well, by using a Depth Render Buffer you are free to use 3D models in your 2D application. So you can rotate, scale and translate the 3D models while respecting the grid and the 2D concepts (no Z translation). The result is much, much better, but as with any improvement, it has a performance cost compared to 2D squares, of course.

But there is another important thing about mixing 3D and 2D concepts: the use of cameras. Instead of creating a single plane right in front of the screen, locking the Z translations, you can create a large plane along the Z axis, place your objects just as in a 3D application and create a camera with an orthographic projection. You remember what that is and how to do it, right? (<a href='http://blog.db-in.com/cameras-on-opengl-es-2-x/#projections' target="_blank">click here to check the article about cameras</a>).

Before going further into cameras and the Depth Render Buffer with 2D graphics, it's important to know that, at the code level, there is no real difference between 2D and 3D graphics; everything comes from your own planning and organization. So the code to use the Depth Buffer is the same we saw in the last part (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2/#render_buffers' target="_blank">click here to see the last tutorial</a>).

Now let's talk about the cameras with 2D graphics.

<br/><a name="cameras_2d"></a>  
<h2><strong>The Cameras with 2D</strong></h2><a href="#list_contents">top</a>  
OK, I'm sure you know how to create a camera and an orthographic projection by now, as you've seen in the tutorial about cameras, right? (<a href='http://blog.db-in.com/cameras-on-opengl-es-2-x/#projections' target="_blank">click here to check the cameras tutorial</a>). Now a question comes up: "Where is the best place, and what is the best approach, to use cameras and the depth render buffer in 2D graphics?". The following image could help more than 1,000 words:

<img class="size-medium wp-image-1379" title="cameras_projection_example" src='http://db-in.com/images/cameras_projection_example.jpg'  alt="Same camera in both projections. (click to enlarge)" width="300" height="240" />

This image shows a scene in the Diablo style, with the camera in a similar position. You can clearly notice the difference between the two projections. Look at the red lines across the picture: with the Orthographic projection those lines are parallel, but with the Perspective projection they are not really parallel and converge toward infinity.

Now focus on the greyscale picture at the bottom right. That is the scene with the objects. As you can see, they are really 3D objects, but with an orthographic projection you can create scenes like Diablo, Sim City, Starcraft and other best sellers, giving a 2D look to your 3D application.

If you take another look at that image of the Gun Bros game, you can see that's exactly what they do: there is a camera with an orthographic projection and real 3D objects placed in the scene.

So the best approach is to create a camera in your desired position, construct your whole scene in a 3D world, set the camera to use an orthographic projection and guide your spatial changes by using the grid concept.

<img class="size-full wp-image-1380" title="cameras_grid_example" src='http://db-in.com/images/cameras_grid_example.jpg'  alt="The grid concept is very important even with cameras and Depth Render Buffer." width="600" height="447" />

I have one last piece of advice about this subject... well, it's not really advice, it's more like a warning. Perspective and Orthographic projections are completely different, so the same configuration of focal point, angle of view, near and far produces completely different results. You need to find a configuration for the Orthographic projection different from the one you were using with the Perspective projection. If you have a working perspective projection and simply switch to an orthographic projection, you probably won't see anything. This is not a bug; it's just the difference between the perspective and orthographic calculations.
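To make the difference concrete, here is a sketch in C of how an orthographic projection matrix can be built. OpenGL ES 2.0 has no <code>glOrtho</code>; you build the matrix yourself and upload it to your shader as a uniform. The function name is illustrative, but the layout follows the classic column-major OpenGL convention:

```c
/* Builds a column-major orthographic projection matrix with the same
   parameters as the classic glOrtho. Note there is no division by Z
   anywhere: that is exactly why parallel lines stay parallel. */
void ortho_matrix(float m[16], float left, float right,
                  float bottom, float top, float nearZ, float farZ)
{
    for (int i = 0; i < 16; i++) m[i] = 0.0f;

    m[0]  =  2.0f / (right - left);   // scale X into clip space
    m[5]  =  2.0f / (top - bottom);   // scale Y into clip space
    m[10] = -2.0f / (farZ - nearZ);   // scale Z into clip space
    m[12] = -(right + left)   / (right - left);
    m[13] = -(top + bottom)   / (top - bottom);
    m[14] = -(farZ + nearZ)   / (farZ - nearZ);
    m[15] =  1.0f;                    // W stays 1: no perspective divide
}
```

Compare this with a perspective matrix, where <code>m[11]</code> is -1 and <code>m[15]</code> is 0: that difference is what makes the same near/far configuration produce completely different images.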

OK, these are the most important concepts about 2D graphics with OpenGL. Let's make a little review of them.  

<ul>  
    <li>There are two ways of using 2D graphics with OpenGL: by using or not the Depth Render Buffer.</li>
    <li>Without the Depth Render Buffer you construct everything as rectangles on the screen, forgetting the Z axis. Your job here will be laborious on the textures side, but this way you get the best performance from OpenGL.</li>
    <li>By using the Depth Render Buffer you can use real 3D objects, and you will probably want a camera with an orthographic projection.</li>
    <li>Independently of the way you choose, always use the Grid concept when working on 2D graphics. It's the best way to organize your world and optimize your application's performance.</li>
</ul>  

Now it's time to go back to the 3D world and talk a little about multisampling and the anti-aliasing filter.

<br/><a name="multisampling"></a>  
<h2><strong>The Multisampling</strong></h2><a href="#list_contents">top</a>  
I'm sure you've already noticed that every 3D application with real-time rendering has aliased edges on its objects. I'm talking about the 3D world in general, like 3D software or games; the edges always (I mean, in the majority of cases) look kind of aliased. That doesn't happen due to a lack of well-developed techniques to fix it, but rather because our hardware is not yet powerful enough to blend pixels in real time fast enough.

So, the first thing I want to say about the Anti-Aliasing filter is: "it's expensive!". In the majority of cases this little problem (aliased edges) doesn't matter. But there are some situations where your 3D application needs to look better. The simplest and most common example is rendering in 3D software. When we hit the render button in our 3D software we expect to see gorgeous images, not jagged edges.

In simple words, the OpenGL primitives get rasterized onto a grid (yes, like our grid concept), and their edges may become jagged. OpenGL ES 2.0 supports something called <em>multisampling</em>. It's an anti-aliasing technique in which each pixel is divided into a few samples; each of these samples is treated like a mini-pixel in the rasterization process. Each sample has its own information about color, depth and stencil. When you ask OpenGL for the final image in the Frame Buffer, it will resolve and mix all the samples. This process produces smoother edges. OpenGL ES 2.0 is always configured for the multisampling technique, even if the number of samples is 1, meaning 1 pixel = 1 sample. It looks very simple in theory, but remember that OpenGL doesn't know anything about the device's surface, and consequently anything about the device's pixels and colors.
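Conceptually, the resolve step just combines each pixel's samples into one value. A toy sketch of that idea in C, for a single 8-bit channel (the real resolve happens in hardware over color, depth and stencil, so this is only a mental model):

```c
#include <stdint.h>

/* Toy model of the multisample "resolve": the final value of a pixel
   channel is the average of its per-sample values. With 4 samples on
   an edge, half covered by a white triangle and half by a black
   background, the pixel comes out grey, which is what smooths the edge. */
uint8_t resolve_channel(const uint8_t *samples, int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return (uint8_t)(sum / count);
}
```

This also makes the cost obvious: with 4 samples, the rasterizer produces and stores 4 times the data for every pixel before this averaging can happen.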

The bridge between OpenGL and the device is made by EGL. So the device's color information, pixel information and surface information are the responsibility of EGL, and consequently multisampling cannot be implemented by OpenGL alone; it needs a plugin, which is the responsibility of the vendors. Each vendor must create an EGL plugin providing the necessary information; with it, OpenGL can really resolve the multiple samples. The default EGL API offers a multisampling configuration, but the vendors commonly make some changes to it.

In the case of Apple, this plugin is called "Multisample APPLE" and it's located in the OpenGL Extensions Header (glext.h). To correctly implement the Apple Multisample you need 2 Frame Buffers and 4 Render Buffers! One Frame Buffer is the normal one provided by OpenGL; the other is the Multisample Frame Buffer. The Render Buffers are Color and Depth, one pair for each Frame Buffer.

There are three new functions in the glext.h to deal with Multisample APPLE:


<table width="675">  
<tr>  
<th>Multisample APPLE</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glRenderbufferStorageMultisampleAPPLE(GLenum target, GLsizei samples, GLenum internalformat, GLsizei width, GLsizei height)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target will always be <strong>GL_RENDERBUFFER</strong>; this is just an internal convention of OpenGL.</li>
    <li><strong>samples</strong>: This is the number of samples the Multisample filter will use per pixel.</li>
    <li><strong>internalformat</strong>: This specifies what kind of render buffer we want and what color format this temporary image will use. This parameter can be:
<ul>  
    <li><strong>GL_RGBA4</strong>, <strong>GL_RGB5_A1</strong>, <strong>GL_RGB565</strong>, <strong>GL_RGB8_OES</strong> or <strong>GL_RGBA8_OES</strong> for a render buffer with final colors.</li>
    <li><strong>GL_DEPTH_COMPONENT16</strong> or <strong>GL_DEPTH_COMPONENT24_OES</strong> for a render buffer with Z depth.</li>
</ul></li>  
    <li><strong>width</strong>: The final width of a render buffer.</li>
    <li><strong>height</strong>: The final height of a render buffer.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glResolveMultisampleFramebufferAPPLE(void)</strong></h5><br/>  
<ul>  
    <li>This function doesn't take any parameters. It just resolves the last two frame buffers bound to <strong>GL_DRAW_FRAMEBUFFER_APPLE</strong> and <strong>GL_READ_FRAMEBUFFER_APPLE</strong>, respectively.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glDiscardFramebufferEXT(GLenum target, GLsizei numAttachments, const GLenum *attachments)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: Usually the target will be <strong>GL_READ_FRAMEBUFFER_APPLE</strong>.</li>
    <li><strong>numAttachments</strong>: The number of Render Buffer attachments to discard in the target Frame Buffer. Usually this will be 2, to discard the Color and Depth Render Buffers.</li>
    <li><strong>attachments</strong>: A pointer to an array containing the type of Render Buffer to discard. Usually that array will be <strong>{GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT}</strong>.</li>
</ul>  
</td>  
</tr>  
</table>


Before checking the code, let's understand a little bit more about these new functions. The first one (<strong>glRenderbufferStorageMultisampleAPPLE</strong>) is intended to replace the function that sets the properties of the Render Buffer, <strong>glRenderbufferStorage</strong>. The big news in this function is the number of samples, which defines how many samples each pixel will have.

The second one (<strong>glResolveMultisampleFramebufferAPPLE</strong>) resolves the samples of each pixel in the Multisample Frame Buffer (bound as the read framebuffer) and writes the resulting image into our normal Frame Buffer (bound as the draw framebuffer). In simple words, this is the core of Multisample APPLE; this is the function which does all the work.

The last one (<strong>glDiscardFramebufferEXT</strong>) is another clearing function. As you can imagine, after <strong>glResolveMultisampleFramebufferAPPLE</strong> does all the processing, the Multisample Frame Buffer is left full of information, so it's time to free that memory. To do that, we call <strong>glDiscardFramebufferEXT</strong>, informing it of what we want to discard and from where.

Now, here is the full code to use Multisample APPLE:


<table width="675">  
<tbody>  
<tr>  
<th>Multisample Framebuffer APPLE</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
// EAGL
// Assume that _eaglLayer is a CAEAGLLayer data type and was already defined.
// Assume that _context is an EAGLContext data type and was already defined.

// Dimensions
int _width, _height;

// Normal Buffers
GLuint _frameBuffer, _colorBuffer, _depthBuffer;

// Multisample Buffers
GLuint _msaaFrameBuffer, _msaaColorBuffer, _msaaDepthBuffer;  
int _samples = 4; // This represents the number of samples.

// Normal Frame Buffer
glGenFramebuffers(1, &amp;_frameBuffer);  
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);

// Normal Color Render Buffer
glGenRenderbuffers(1, &amp;_colorBuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);  
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);

// Retrieves the width and height to the EAGL Layer, just necessary if the width and height was not informed.
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &amp; _width);  
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &amp; _height);

// Normal Depth Render Buffer
glGenRenderbuffers(1, &amp;_depthBuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _width, _height);  
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);  
glEnable(GL_DEPTH_TEST);

// Multisample Frame Buffer
glGenFramebuffers(1, &amp;_msaaFrameBuffer);  
glBindFramebuffer(GL_FRAMEBUFFER, _msaaFrameBuffer);

// Multisample  Color Render Buffer
glGenRenderbuffers(1, &amp;_msaaColorBuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, _msaaColorBuffer);  
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, _samples, GL_RGBA8_OES, _width, _height);  
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _msaaColorBuffer);

// Multisample Depth Render Buffer
glGenRenderbuffers(1, &amp;_msaaDepthBuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, _msaaDepthBuffer);  
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, _samples, GL_DEPTH_COMPONENT16, _width, _height);  
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _msaaDepthBuffer);  
.</pre>

Yes, that's many lines for a basic configuration. Once all those 6 buffers have been defined, we also need to render with a different approach. Here is the necessary code:


<table width="675">  
<tbody>  
<tr>  
<th>Rendering with Multisample APPLE</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
//-------------------------
//    Pre-Render
//-------------------------
// Clears normal Frame Buffer
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);  
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Clears multisample Frame Buffer
glBindFramebuffer(GL_FRAMEBUFFER, _msaaFrameBuffer);  
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

//-------------------------
//    Drawing
//-------------------------
//...
// Draw all your content.
//...

//-------------------------
//    Render
//-------------------------
// Resolving Multisample Frame Buffer.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _frameBuffer);  
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msaaFrameBuffer);  
glResolveMultisampleFramebufferAPPLE();

// Apple (and the Khronos Group) encourages you to discard
// render buffer contents whenever possible.
GLenum attachments[] = {GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT};  
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 2, attachments);

// Presents the final result at the screen.
glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);  
[_context presentRenderbuffer:GL_RENDERBUFFER];
.</pre>

If you want to refresh something about EAGL (the EGL implementation by Apple), check it here: <a href='http://blog.db-in.com/khronos-egl-and-apple-eagl/'  target="_blank">article about EGL and EAGL</a>. OpenGL also offers some configurations for multisampling, <strong>glSampleCoverage</strong> and a few settings with <strong>glEnable</strong>. I won't talk in depth about these configurations here, because I don't believe multisampling is a good thing to spend our time on. As I told you, the result is not a big deal; it's just a little more refined. In my opinion, the performance cost is too high compared to the final result:

<img src='http://db-in.com/images/multisample_result_example.jpg'  alt="Same 3D model rendered without and with Anti-Alias filter." title="multisample_result_example" width="600" height="470" class="size-full wp-image-1392" />

OK, now it's time to talk more about the textures in OpenGL.

<br/><a name="more_texture"></a>  
<h2><strong>More About Textures</strong></h2><a href="#list_contents">top</a>  
We already know many things about textures from the second part of this series (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2/#textures' target="_blank">All About OpenGL ES 2.x - Textures</a>). First, let's talk about the optimized types. They are a great boost to our application's performance and very easy to implement. I'm talking about the bytes per pixel of our images.

<br/><a name="bpp"></a>  
<h2>Bytes per Pixel</h2><a href="#list_contents">top</a>  
Usually images have 4 bytes per pixel, one byte for each channel: RGBA. Some images without alpha, like the JPG file format, have only 3 bytes per pixel (RGB). Each byte can be represented by a hexadecimal pair in the format 0xFF; it's called hexadecimal because each digit has the range 0 - F (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F), so a combination of two hexadecimal digits gives one byte (16 x 16 = 256). As a convention, we describe a hexadecimal color as 0xFFFFFF, where each pair of digits represents one color channel (RGB). For images with an alpha channel, like the PNG format, we usually say 0xFFFFFF + 0xFF, that is, RGB + A.
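The "one pair of hex digits per channel" idea maps directly onto bit shifts. A minimal sketch in C (the function names are illustrative) that unpacks the channels of a 0xRRGGBB color:

```c
#include <stdint.h>

/* Unpacks one 8-bit channel from a 0xRRGGBB color. Each pair of hex
   digits is one byte, i.e. one channel in the 0-255 range, so each
   channel sits exactly 8 bits apart. */
uint8_t red_of(uint32_t rgb)   { return (rgb >> 16) & 0xFF; }
uint8_t green_of(uint32_t rgb) { return (rgb >> 8)  & 0xFF; }
uint8_t blue_of(uint32_t rgb)  { return  rgb        & 0xFF; }
```

For example, 0xFF8000 is full red, half green, no blue.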

My next article will be about binary programming, so I won't talk in depth about binaries here. All we need to know for now is that 1 byte = 1 color channel. OpenGL can also work with more compressed formats, which use only 2 bytes per pixel. What does that mean? It means that, roughly, each byte will store two color channels, including alpha. In very simple words, we reduce the color range of the image.

OpenGL offers us 3 compressed data types: <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong>, <strong>GL_UNSIGNED_SHORT_5_5_5_1</strong> and <strong>GL_UNSIGNED_SHORT_5_6_5</strong>. The first two should be used when you have an alpha channel, the last one only for situations without alpha. These 3 names tell us something about the pixel data: the numbers on the right indicate the number of bits (not bytes) used by each channel (RGBA). And just to make it clear, each byte is composed of 8 bits. So the first case uses 4 bits per channel, for a total of 2 bytes per pixel. The second uses 5 bits for each RGB channel and 1 bit for alpha, again 2 bytes per pixel. And the last uses 5 bits for R, 6 bits for G and 5 bits for B, also totaling 2 bytes per pixel.

Here I want to make a warning: the type <strong>GL_UNSIGNED_SHORT_5_5_5_1</strong> is not really useful, because a single bit of alpha is the same as a Boolean: visible or not, that's it. So this type has fewer bits in the Green channel than <strong>GL_UNSIGNED_SHORT_5_6_5</strong> and still can't produce real transparency effects like <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong>. So if you need the alpha channel, use <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong>; if not, use <strong>GL_UNSIGNED_SHORT_5_6_5</strong>.

One more thing to know about <strong>GL_UNSIGNED_SHORT_5_6_5</strong>: as the human eye is more sensitive to green tones, the channel with more bits is exactly the Green channel. This way, even with a smaller color range, the resulting image will not look that different to the final user.

Now let's take a look at the difference between both compressions.  
<img src='http://db-in.com/images/texture_compression_example.jpg'  alt="OpenGL optimized 2 bytes per pixel (bpp) data types." title="texture_compression_example" width="600" height="470" class="size-full wp-image-1393" />

As you can see, <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong> can look really ugly in some situations, while <strong>GL_UNSIGNED_SHORT_5_6_5</strong> looks very nice. Why? I'll explain in detail in the next article about binaries, but in very simple words: with <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong> we have only 16 tonalities for each channel, including 16 tonalities of alpha. But with <strong>GL_UNSIGNED_SHORT_5_6_5</strong> we have 32 tonalities of Red and Blue and 64 tonalities of the Green spectrum. It's still far from the human eye's capacity, but remember that these optimizations save 2 bytes per pixel in all our images, which represents much more performance in our renders.

Now it's time to learn how to convert our traditional images to these formats. Normally, when you extract the binary information from an image you get it pixel by pixel, so each pixel will probably be composed of an "unsigned int" data type, which has 4 bytes. Each programming language provides methods to extract the binary information from the pixels. Once you have your array of pixel data (array of unsigned int) you can use the following code to convert that data to <strong>GL_UNSIGNED_SHORT_4_4_4_4</strong> or <strong>GL_UNSIGNED_SHORT_5_6_5</strong>.


<table width="675">  
<tbody>  
<tr>  
<th>Converting 4bpp to 2bpp</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
typedef enum  
{
    ColorFormatRGB565,
    ColorFormatRGBA4444,
} ColorFormat;

static void optimizePixelData(ColorFormat color, int pixelDataLength, void **pixelData)  
{
    int i;

    // Number of pixels in the original data.
    int length = pixelDataLength;

    void *newData;

    // Pointer to pixel information of 32 bits (R8 + G8 + B8 + A8).
    // 4 bytes per pixel.
    unsigned int *inPixel32;

    // Pointer to new pixel information of 16 bits (R5 + G6 + B5)
    // or (R4 + G4 + B4 + A4).
    // 2 bytes per pixel.
    unsigned short *outPixel16;

    newData = malloc(length * sizeof(unsigned short));
    inPixel32 = (unsigned int *)*pixelData;
    outPixel16 = (unsigned short *)newData;

    if(color == ColorFormatRGB565)
    {
        // Using pointer arithmetic, move the pointer over the original data.
        for(i = 0; i < length; ++i, ++inPixel32)
        {
            // Makes the conversion, ignoring the alpha channel, as follows:
            // 1 -  Isolates the Red channel, discards 3 bits (keeping 5), then shifts it to its final position.
            // 2 -  Isolates the Green channel, discards 2 bits (keeping 6), then shifts it to its final position.
            // 3 -  Isolates the Blue channel, discards 3 bits (keeping 5), then shifts it to its final position.
            *outPixel16++ = (((( *inPixel32 >> 0 ) & 0xFF ) >> 3 ) << 11 ) |
                            (((( *inPixel32 >> 8 ) & 0xFF ) >> 2 ) << 5 ) |
                            (((( *inPixel32 >> 16 ) & 0xFF ) >> 3 ) << 0 );
        }
    }
    else if(color == ColorFormatRGBA4444)
    {
        // Using pointer arithmetic, move the pointer over the original data.
        for(i = 0; i < length; ++i, ++inPixel32)
        {
            // Makes the conversion, as follows:
            // 1 -  Isolates the Red channel, discards 4 bits (keeping 4), then shifts it to its final position.
            // 2 -  Isolates the Green channel, discards 4 bits (keeping 4), then shifts it to its final position.
            // 3 -  Isolates the Blue channel, discards 4 bits (keeping 4), then shifts it to its final position.
            // 4 -  Isolates the Alpha channel, discards 4 bits (keeping 4), then shifts it to its final position.
            *outPixel16++ = (((( *inPixel32 >> 0 ) & 0xFF ) >> 4 ) << 12 ) |
                            (((( *inPixel32 >> 8 ) & 0xFF ) >> 4 ) << 8 ) |
                            (((( *inPixel32 >> 16 ) & 0xFF ) >> 4 ) << 4 ) |
                            (((( *inPixel32 >> 24 ) & 0xFF ) >> 4 ) << 0 );
        }
    }

    // Note the "void **" parameter: assigning the new pointer to a plain
    // "void *" parameter would only change a local copy, so the caller
    // passes the address of its pointer (e.g. &pixelData).
    free(*pixelData);
    *pixelData = newData;
}
.</pre>

The routine above assumes the channel order is RGBA. Although it's not common, your image could have its pixels composed in another channel order, like ARGB or BGR. In those cases you must change the routine above, or change the channel order when extracting the binary information from each pixel. Another important thing is the byte order. I don't want to confuse you if you don't know much about binaries, but just as an advice: you will probably get the pixel data in little-endian format, the traditional one, but if your programming language reads the binary information as big-endian, the routine above will not work properly. So make sure your pixel data is in little-endian format.

<br/><a name="pvrtc"></a>  
<h2>PVRTC</h2><a href="#list_contents">top</a>  
I'm sure you've heard about the texture compression format PVRTC; if you already feel comfortable with this topic, just skip to the next one. PVRTC is a binary format created by "<a href='http://www.imgtec.com/'  target="_blank">Imagination Technology</a>", also called "<em>Imgtec</em>". This format uses the channel order ARGB instead of the traditional RGBA. To tell the truth, its optimization is not about file size; if we look only at size, any JPG is more compressed and even a PNG can be lighter. PVRTC is optimized for processing: its pixels are already stored at 2 bits per pixel (2bpp) or 4 bits per pixel (4bpp). The data inside a PVRTC is OpenGL friendly and can also store Mipmap levels. So, is it a good idea to always use PVRTC? Well, not exactly... let's see why.

The PVRTC format is not supported by default in OpenGL ES 2.0. There are rumors that OpenGL ES 2.1 will come with native support for PVRTC textures, but what we have for now is just OpenGL ES 2.0. To use PVRTC with it, just as with Multisampling, you need a vendor extension. In the case of Apple, this extension defines four new constant values. OpenGL provides a function to upload pixel data in compressed formats, like PVRTC:


<table width="675">  
<tr>  
<th>Uploading PVRTC</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glCompressedTexImage2D (GLenum target, GLint level, GLenum internalformat, GLsizei width, GLsizei height, GLint border, GLsizei imageSize, const GLvoid* data)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_TEXTURE_2D, this is just an internal convention for OpenGL.</li>
    <li><strong>level</strong>: The Mipmap level this data belongs to. Use 0 for the base image.</li>
    <li><strong>internalformat</strong>: The format of the PVRTC. This parameter can be:
<ul>  
    <li><strong>GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG</strong>: Files using 2bpp without the alpha channel.</li>
    <li><strong>GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG</strong>: Files using 2bpp and the alpha channel.</li>
    <li><strong>GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG</strong>: Files using the 4bpp without the alpha channel.</li>
    <li><strong>GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG</strong>: Files using 4bpp and the alpha channel.</li>
</ul></li>  
    <li><strong>width</strong>: The width of the image.</li>
    <li><strong>height</strong>: The height of the image.</li>
    <li><strong>border</strong>: This parameter is ignored in OpenGL ES. Always use the value 0. It exists just to preserve compatibility with the desktop versions.</li>
    <li><strong>imageSize</strong>: The number of the bytes in the binary data.</li>
    <li><strong>data</strong>: The binary data for the image.</li>
</ul>  
</td>  
</tr>  
</table>


As you can imagine, the internal format constant (<strong>GL_COMPRESSED_RGB*</strong>) is chosen based on the file format: RGB or RGBA, with 2bpp or 4bpp.

To generate the PVRTC you have many options. The two most common are the Imgtec tools and Apple's Texture Tool. Here you can find the <a href='http://www.imgtec.com/powervr/insider/powervr-utilities.asp'  target="_blank">Imgtec Tools</a>. The Apple tool comes with the iPhone SDK; it's located at the path "<em>&lt;Xcode Folder&gt;/iPhoneOS.platform/Developer/usr/bin</em>" under the name "<em>texturetool</em>". You can find all the information about it at <a href='http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TextureTool/TextureTool.html' target="_blank">Apple Texture Tool</a>.

I'll explain how to use the Apple tool here. Follow these steps:  

<ul>  
    <li>Open the Terminal.app (usually it is in /Applications/Utilities/Terminal.app)</li>
    <li>Click on <strong>texturetool</strong> in Finder and drag & drop it onto the Terminal window. You can also write the full path "<em>&lt;Xcode Folder&gt;/iPhoneOS.platform/Developer/usr/bin/texturetool</em>", but I prefer drag & drop.</li>
    <li>Write in front of texture tool path: " -e PVRTC --channel-weighting-linear --bits-per-pixel-2 -o "</li>
    <li>Now you should write the output path. Again, I prefer to drag & drop the file from Finder to the Terminal window and rename its extension. The extension really doesn't matter, but my advice is to write something that lets you identify the file format, like pvrl2 for Channel Weighting Linear with 2bpp.</li>
    <li>Finally, add a space and write the input file. Guess what... I prefer to drag & drop it from Finder. The input files must be PNG or JPG only.</li>
    <li>Hit "Enter"</li>
</ul>



<table width="675">  
<tbody>  
<tr>  
<th>Terminal Script to Generate PVRTC with Texturetool</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/texturetool
 -e PVRTC --channel-weighting-linear --bits-per-pixel-2 -o 
/Texture/Output/Path/Texture.pvrl2 /Texture/Intput/Path/Texture.jpg
.</pre>

Good, now you have a PVRTC file. The problem with the Apple tool is that it doesn't generate the traditional PVRTC binary header, which is composed of 52 bytes at the beginning of the file and gives instructions about the height and width of the image, the number of Mipmaps in it, the bpp, the channel order, alpha, etc. In traditional PVRTC files, this is the header format:  

<ul>  
    <li><strong>unsigned int (4 bytes)</strong>: Header Length in Bytes. Old PVRTC files have a header of 44 bytes instead of 52.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Height of the image. PVRTC only accepts squared images (width = height) and POT sizes (Power of Two).</li>
    <li><strong>unsigned int (4 bytes)</strong>: Width of the image. PVRTC only accepts squared images (width = height) and POT sizes (Power of Two).</li>
    <li><strong>unsigned int (4 bytes)</strong>: Number of Mipmaps.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Flags.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Data Length of the image.</li>
    <li><strong>unsigned int (4 bytes)</strong>: The bpp.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Bitmask Red.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Bitmask Green.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Bitmask Blue.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Bitmask Alpha.</li>
    <li><strong>unsigned int (4 bytes)</strong>: The PVR Tag.</li>
    <li><strong>unsigned int (4 bytes)</strong>: Number of Surfaces.</li>
</ul>


But using the Apple Texture Tool we don't have the file header, and without that header we can find neither the width nor the height of the file from our code. So to use a PVRTC from the Apple tool you must already know its bpp, width, height and alpha. Kind of annoying, no?

Well... I have good news for you. I found a way, a trick, to extract information from the PVRTC generated by the Apple tool. This trick works fine, although it can't identify anything about Mipmaps. But that's not a problem, because the Apple tool doesn't generate Mipmaps anyway.


<table width="675">  
<tbody>  
<tr>  
<th>Extracting Infos From PVRTC Without Header</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
// Supposing the bpp of the image is 4, calculate its squared size.
float size = sqrtf([data length] * 8 / 4);

// Checks if the bpp is really 4 by testing the remainder of the division
// by 8, the minimum block size of PVRTC. If the remainder is zero this
// image really has 4 bpp; otherwise, it has 2 bpp.
bpp = ((int)size % 8 == 0) ? 4 : 2;

// Knowing the bpp, calculates the width and height
// based on the data size.
width = sqrtf([data length] * 8 / bpp);  
height = sqrtf([data length] * 8 / bpp);  
length = [data length];  
.</pre>

PVRTC files made with Texturetool don't have any header, so the image data starts at the first byte of the file. "And what about the alpha?", you may ask. Well, the alpha depends on your EAGL context configuration. If you are using RGBA8, assume the alpha exists and use <strong>GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG</strong> or <strong>GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG</strong>, based on the information you extracted with the code above. If your EAGL context uses RGB565, assume <strong>GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG</strong> or <strong>GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG</strong>.

Now, using your PVRTC in OpenGL ES 2.0 is very simple: you barely need to change anything. Create your texture normally, just replacing the call to <strong>glTexImage2D</strong> with the <strong>glCompressedTexImage2D</strong> function.


<table width="675">  
<tbody>  
<tr>  
<th>Uploading PVRTC to OpenGL</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
// format = one of the GL_COMPRESSED_RGB* constants.
// width = width extract from the code above.
// height = height extract from the code above.
// length = length extract from the code above.
// data = the array of pixel data loaded via NSData or any other binary class.

// You probably will use NSData to load the PVRTC file.
// By using "dataWithContentsOfFile" or similar NSData methods.

glCompressedTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, length, data);  
.</pre>

Well done, this is all about PVRTC. But my last advice on this topic is: avoid using PVRTC whenever you can. The cost-benefit is not that good. Remember you only need to parse an image file into OpenGL once, so PVRTC doesn't offer a great optimization.

<br/><a name="off-screen"></a>  
<h2><strong>The Off-Screen Render</strong></h2><a href="#list_contents">top</a>  

<p>Until now we've just talked about render to the screen, "on the screen", "on the device", but we also have another surface to render, the off-screen surfaces. You remember it from the EGL article, right? (<a href='http://blog.db-in.com/khronos-egl-and-apple-eagl/'  target="_blank">EGL and EAGL article</a>).</p>

<p>What is the utility of an off-screen render? We can take a snapshot of the current frame and save it as an image file, but the most important use of off-screen renders is to create an OpenGL texture with the current frame and then use this new internal texture to make a reflection map, a real-time reflection. I'll not talk about reflections here; that subject is more appropriate for a tutorial specifically about shaders and lights. Let's focus only on how to render to an off-screen surface. We'll need to know a new function:</p>

<table width="675">  
<tr>  
<th>Off-Screen Render</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glFramebufferTexture2D(GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be <strong>GL_FRAMEBUFFER</strong>, this is just an internal convention for OpenGL.</li>
    <li><strong>attachment</strong>: This specifies what kind of render buffer we want to render to. This parameter can be:
<ul>  
    <li><strong>GL_COLOR_ATTACHMENT0</strong> to the Color Render Buffer.</li>
    <li><strong>GL_DEPTH_ATTACHMENT</strong> to a Depth Render Buffer.</li>
</ul></li>  
    <li><strong>textarget</strong>: The type of texture. For a 2D texture this parameter will always be <strong>GL_TEXTURE_2D</strong>. If your texture is a 3D texture (Cube Map), you can use one of its faces as this parameter: GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y or GL_TEXTURE_CUBE_MAP_NEGATIVE_Z.</li>
    <li><strong>texture</strong>: The texture object target.</li>
    <li><strong>level</strong>: Specifies the Mipmap level for the texture.</li>
</ul>  
</td>  
</tr>  
</table>

<p>To use this function we need to first create the target texture. We can do it just as before (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2/#textures' target="_blank">check out the texture functions here</a>). Then we call <strong>glFramebufferTexture2D</strong> and proceed normally with our render routine. After drawing something (glDraw* calls) that texture object will be filled and you can use it for anything you want. Here is an example:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Drawing to Off-Screen Surface</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
// Create and bind the Frame Buffer.
// Create and attach the Render Buffers, except the render buffer which will
// receive the texture as attachment.

GLuint _texture;  
glGenTextures(1, &_texture);  
glBindTexture(GL_TEXTURE_2D, _texture);  
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,  textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);  
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);  
.</pre>

<p>Differently from creating a texture from pixel data, this time you set the texture data to <strong>NULL</strong>, because it will be filled dynamically later on. If you intend to use the output image as a texture for another draw, remember to first draw the objects that will fill the output texture.</p>

<p>Well, as with any Frame Buffer operation, it's a good idea to check <strong>glCheckFramebufferStatus</strong> to see if everything was attached OK. A new question comes up: "If I want to save the resulting texture to a file, how could I retrieve the pixel data from the texture?" OpenGL is a good mother, she gives us this function:</p>

<table width="675">  
<tr>  
<th>Getting Pixel Data from Texture</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glReadPixels (GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid* pixels)</strong></h5><br/>  
<ul>  
    <li><strong>x</strong>: The X position to start getting pixel data. Remember that the pixel order in OpenGL starts in the lower left corner and goes to the upper right corner.</li>
    <li><strong>y</strong>: The Y position to start getting pixel data. Remember that the pixel order in OpenGL starts in the lower left corner and goes to the upper right corner.</li>
    <li><strong>width</strong>: The width to get the pixel data. Can't be greater than the original render buffer.</li>
    <li><strong>height</strong>: The height to get the pixel data. Can't be greater than the original render buffer.</li>
    <li><strong>format</strong>: Always use <strong>GL_RGB</strong>. There are other formats, but they are implementation dependent and can vary between vendors. For example, getting Alpha information depends on your EGL context configuration, which is vendor dependent.</li>
    <li><strong>type</strong>: Always use <strong>GL_UNSIGNED_BYTE</strong>. There are other types, but they are implementation dependent and can vary between vendors.</li>
    <li><strong>pixels</strong>: A pointer to return the pixel data.</li>
</ul>  
</td>  
</tr>  
</table>

<p>As you've seen, the function is very easy, and you can call it at any time. Just remember a very important thing: the OpenGL pixel order! It starts in the lower left corner and goes to the upper right corner. For a traditional image file, that means the image is flipped vertically, so if you want to save it to a file, take care of that.</p>

<p>Now you must do the inverse of the path you're used to when importing a texture: this time you have the pixel data and want to construct a file. Fortunately many languages offer a simple way to construct an image from pixel data. For example, with Cocoa Touch (Objective-C), we can build a UIImage from the pixel data, like this:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Saving the Pixel Data to a File</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
// The pixelData variable is a "void *" initialized with allocated memory
// (256 x 256 pixels x 3 bytes for RGB).
glReadPixels(0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, pixelData);

// UIImage can't read raw pixels directly, so first wrap them in a CGImage.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixelData, 256 * 256 * 3, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(256, 256, 8, 24, 256 * 3, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];

// Now you can save the image as JPG or PNG. The quality goes from 0.0 to 1.0.
[UIImageJPEGRepresentation(image, 1.0) writeToFile:@"A path to save the file" atomically:YES];

// Remember to release the CGImage, color space and data provider afterwards.
.</pre>

<p>Just a little question: <strong>glReadPixels</strong> will read from where? OpenGL State Machine, do you remember? <strong>glReadPixels</strong> will read the pixels from the "last Frame Buffer bound".</p>

<p>Now it's time to talk more about optimization.</p>

<p><br/><a name="tips_tricks"></a>  </p>

<h2><strong>Tips and Tricks</strong></h2><a href="#list_contents">top</a>  
I want to talk now about some tips and tricks to boost your application. I don't want to talk about little optimizations that make you gain 0.001 secs, no. I want to talk about real optimizations, the ones that can save 0.5 secs or even increase your render frame rate.

<br/><a name="cache"></a>  
<h3>The Cache</h3><a href="#list_contents">top</a>  
This one is very important; I really love it, I use it everywhere, it's great! Imagine this situation: the user touches an object on the screen to rotate it. Then the user touches another object, while the first one doesn't change anymore. So it would be great to cache the first object's transformation matrix, instead of recalculating it at each frame.

The cache concept extends to other areas too, like cameras, lights and quaternions. Instead of recalculating something at each frame, use a little BOOL to check whether a matrix, or even a plain value, is cached or not. The following pseudo-code shows how simple it is to work with the cache concept.


<table width="675">  
<tbody>  
<tr>  
<th>Cache Concept</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
bool _matrixCached;  
float _changeValue;  
float *_matrix;

float *matrix(void)  
{
    if (!_matrixCached)
    {
        // Do changes into _matrix.

        _matrixCached = true;
    }

    return _matrix;
}

void setChange(float value)  
{
    // Change the _changeValue which will affect the matrix.

    _matrixCached = false;
}
.</pre>

<br/><a name="store_values"></a>  
<h3>Store the Values</h3><a href="#list_contents">top</a>  
We are used to changing the matrix (or the quaternion) every time a transformation occurs. For example, if our code makes the changes: translate X - we change the resulting matrix, translate Y - we change the matrix, rotate Z - change the matrix, and scale Y - change the matrix. Some 3D engines and developers do not even hold on to those transformation values, so if the code needs to retrieve them, it extracts the values directly from the resulting matrix. But this is not the best approach. A great optimization can be achieved if we store the values independently: translations X, Y and Z, rotations X, Y and Z and scales X, Y and Z.

By storing the values you can make a single change to the resulting matrix, doing the calculations once per frame instead of at every transformation. The following pseudo-code can help you understand the Store concept better:


<table width="675">  
<tbody>  
<tr>  
<th>Store Concept</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">.  
float _x;  
float _y;  
float _z;

float x(void) { return _x; }  
void setX(float value)  
{
    _x = value;
}

float y(void) { return _y; }  
void setY(float value)  
{
    _y = value;
}

float z(void) { return _z; }  
void setZ(float value)  
{
    _z = value;
}

float *matrix(void)  
{
    // This function will be called once per frame.
    // Make the changes to the matrix based on _x, _y and _z.
}
.</pre>

<br/><a name="c_fastest"></a>  
<h3>C is always the fast language</h3><a href="#list_contents">top</a>  
This tip is just a reminder. You probably know this already, but it's very important to reinforce: C is an extremely fast language, and hardly anything beats well-written C. It's the most basic language and the great father of almost all computer languages. So always try to use C in the most critical parts of your code, especially the render routines.

String comparison in C is around 4x faster than Objective-C comparison. So if you need to check some string value at render time, prefer converting it from NSString to a C string (char *) and making the comparison there; even if you need to convert back from C string to NSString afterwards, the C string is still faster. To compare C strings, just use <strong>if (strcmp(string1, string2) == 0)</strong>.

Especially for numbers, always use basic C data types (<strong>float</strong>, <strong>int</strong>, <strong>short</strong>, <strong>char</strong> and their unsigned versions). Besides, avoid as much as possible values that use 64 bits, like the <strong>long</strong> or <strong>double</strong> data types. Remember that OpenGL ES doesn't support 64-bit data types by default.

<br><a name="conclusion"></a>  
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a>  
OK dudes, we've reached the end of this series' objective. I'm sure you now know a lot about OpenGL and the 3D world. I have covered almost everything about OpenGL in the 3 tutorials of this series. I hope you learned all the concepts covered in these tutorials.

Now, as we are used, let's remember everything:


<ul>  
    <li>2D graphics with OpenGL can be done by two ways: with or without Depth Render Buffer.</li>
    <li>When using Depth Render Buffer, it could also be good to make use of a camera with Orthographic projection.</li>
    <li>Independent of the way you choose, always use the Grid concept with 2D graphics.</li>
    <li>The Multisampling filter is a plug-in that depends on the vendor's implementation. Multisampling always has a big performance cost; use it only in special situations.</li>
    <li>Always try to optimize your textures to a 2bpp data format.</li>
    <li>You can use PVRTC in your application to save some time when creating an OpenGL texture from a file.</li>
    <li>Always try to use the Cache concept when working with matrices.</li>
    <li>Make use of the Store Values concept to save CPU processing.</li>
    <li>Prefer basic C language on the critical render routines.</li>
</ul>


Well, you know, if you have any doubt, just ask me: leave a comment below and if I can help, I'll be glad.

<br/>  
<h2><strong>From here and beyond</strong></h2>  

<p>Well, and now? Is this all? No. It will never be enough! Points and lines deserve a special article. With points we can make particles and some cool effects. As I said at the beginning of this tutorial, you can use points for 2D graphics instead of squares, in case you have no Depth Render Buffer.</p>

<p>And what about the shaders? In depth? The Programmable Pipeline gives us a whole new world of programming. We should talk about Surface Normals vs. Vertex Normals, about tangent space, the normal bump effect, the reflection and refraction effects. To tell the truth... I think we need a new tutorial series called "All About OpenGL Shaders". Well, that could be my next series.</p>

<p>But I want to hear from you, tell me here or on Twitter what you want to know more about. <br />
Just Tweet me: <br />
<a href='http://twitter.com/share'  class="twitter-share-button" data-text="Hey @dineybomfim talk more about: " data-count="none" data-url="none">Tweet</a><script type="text/javascript" src='http://platform.twitter.com/widgets.js' ></script></p>

<p>Thanks again for reading. <br />
See you in the next tutorial!</p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/all-about-opengl-es-2-x-part-3/</link><guid isPermaLink="false">23053c0f-0f8b-4fdf-8a16-414630564e06</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:46:29 GMT</pubDate></item><item><title><![CDATA[All about OpenGL ES 2.x - (part 2&#x2F;3)]]></title><description><![CDATA[<p><img src='http://db-in.com/images/opengl_part2.jpg'  alt="" title="opengl_part2" width="350" height="350" class="alignright size-full wp-image-1291" /> <br />
Very welcome back, my friends!</p>

<p>Now that we know the basic concepts of the 3D world and OpenGL, it's time to start the fun! Let's go deep into code and see some results on our screens. Here I'll show you how to construct an OpenGL application using the best practices.</p>

<p>If you missed the first one, you can check all the parts of this series below.</p>

<p>This series is composed of 3 parts:  </p>

<ul>  
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1'  target="_blank">Part 1 - Basic concepts of 3D world and OpenGL (Beginners)</a></li>
    <li>Part 2 - OpenGL ES 2.0 in-depth (Intermediate)</li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-3'  target="_blank">Part 3 - Jedi skills in OpenGL ES 2.0 and 2D graphics (Advanced)</a></li>
</ul>  

<!--more-->  

<p>Before starting, I want to say something: Thank You! <br />
The first tutorial of this series became much, much bigger than I could imagine. When I saw the news on the home page of the <a href='http://www.opengl.org/'  target="_blank">http://www.opengl.org</a> website, I was speechless, stunned; that was really amazing!!! <br />
So I want to say it again: Thank you so much!</p>

<p><a name="list_contents"></a> <br />
Here is a little list of contents to orient your reading:  </p>

<table width="675">  
<tr>  
<th colspan=2>List of Contents to this Tutorial</th>  
</tr>  
<tr><td valign="top">  
<ul>  
    <li><a href="#download_project">Download the OpenGL ES 2.0 iPhone project</a></li>
    <li><a href="#data_types">OpenGL data types and programmable pipeline</a></li>
    <li><a href="#primitives">Primitives</a>
        <ul>
            <li><a href="#meshes_lines_optimization">Meshes and Lines Optimization</a></li>
        </ul></li>
    <li><a href="#buffers">Buffers</a>
        <ul>
            <li><a href="#frame_buffers">Frame buffer</a></li>
            <li><a href="#render_buffers">Render buffer</a></li>
            <li><a href="#buffer_object">Buffer Object</a></li>
        </ul></li>
    <li><a href="#textures">Textures</a></li>
    <li><a href="#rasterize">Rasterize</a>
        <ul>
            <li><a href="#face_culling">Face Culling</a></li>
            <li><a href="#per-fragment">Per-Fragment Operations</a></li>
        </ul></li>
</ul>  
</td>  
<td valign="top">  
<ul>  
    <li><a href="#shaders">Shaders</a>
        <ul>
            <li><a href="#shader_program">Shader and Program Creation</a></li>
            <li><a href="#shader_language">Shader Language</a></li>
            <li><a href="#vertex_fragment">Vertex and Fragment Structures</a></li>
            <li><a href="#attributes_uniforms">Setting the Attributes and Uniforms</a></li>
            <li><a href="#using_buffer_objects">Using the Buffer Objects</a></li>
        </ul></li>
    <li><a href="#rendering">Rendering</a>
        <ul>
            <li><a href="#pre-render">Pre-Render</a></li>
            <li><a href="#drawing">Drawing</a></li>
            <li><a href="#render">Render</a></li>
        </ul></li>
    <li><a href="#conclusion">Conclusion</a></li>
</ul>  
</td></tr>  
</table>

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>As requested in the comments, here is a PDF file for those of you who prefer to read this tutorial as a file instead of here on the blog. <br />
<a href='http://db-in.com/downloads/all_about_opengl_es_2x.pdf'  onmousedown="_gaq.push(['_trackEvent', 'All About OpenGL', 'PDF', 'Download']);" target="blank"><img class="alignleft" title="download" src='http://db-in.com/imgs/pdf_button.png'  alt="Download this tutorial in PDF"/> <br />
<strong>Download now</strong> <br />
PDF file with navigation links <br />
1.3Mb <br />
</a><br/></p>

<p>Recalling the first part of this series, we've seen that:  </p>

<ol>  
    <li>OpenGL's logic is composed of just 3 simple concepts: Primitives, Buffers and Rasterize.</li>
    <li>OpenGL works with either a fixed or a programmable pipeline.</li>
    <li>The programmable pipeline is synonymous with Shaders: the Vertex Shader and the Fragment Shader.</li>
</ol>

<p>Here I'll show code based mostly on C and Objective-C. In some parts I'll talk specifically about the iPhone and iOS, but in general the code will be generic enough for any language or platform. As OpenGL ES is the most concise API of the OpenGL family, I'll focus on it. If you're using OpenGL or WebGL, you can still use all the code and concepts here. <br />
<a name="download_project"></a> <br />
The code in this tutorial is just to illustrate the functions and concepts, not production code. In the link below you can get an Xcode project which uses all the concepts and code of this tutorial. I made the principal class (CubeExample.mm) using Objective-C++ just to make clear to everybody how OpenGL ES 2.0 works, even those who don't use Objective-C. This training project was made for iOS, more specifically targeted at the iPhone.</p>

<p><a href='http://db-in.com/downloads/ogles2_cube_example.zip'  onmousedown="_gaq.push(['_trackEvent', 'All About OpenGL', 'Xcode', 'Download']);"><img class="alignleft" title="download" src='http://db-in.com/imgs/download_button.png'  alt="Download Xcode project files to iPhone"/> <br />
<strong>Download now</strong> <br />
Xcode project files to iOS 4.2 or later <br />
172kb <br />
</a><br/></p>

<p>Here I'll use OpenGL functions following the syntax: gl + FunctionName. Almost all implementations of OpenGL use the prefix "gl" or "gl.". If your programming language doesn't use this, just ignore the prefix in the following lines. <br />
<a name="data_types"></a> <br />
Another important thing to mention before we start is OpenGL's data types. As OpenGL is multiplatform and depends on each vendor's implementation, the size of a given data type can change from one programming language or platform to another. For example, a value that occupies exactly 32 bits in one environment may have a different size or representation in another. To avoid this kind of conflict, OpenGL always works with its own data types, prefixed with "GL", like <strong>GLfloat</strong> or <strong>GLint</strong>. Here is a full list of OpenGL's data types:</p>

<table width="675">  
<tr>  
<th>OpenGL's Data Type</th><th>Same as C</th><th>Description</th>  
</tr>  
<tr><td><strong>GLboolean</strong> (1 bit)</td><td>unsigned char</td><td>0 to 1</td></tr>  
<tr><td><strong>GLbyte</strong> (8 bits)</td><td>char</td><td>-128 to 127</td></tr>  
<tr><td><strong>GLubyte</strong> (8 bits)</td><td>unsigned char</td><td>0 to 255</td></tr>  
<tr><td><strong>GLchar</strong> (8 bits)</td><td>char</td><td>-128 to 127</td></tr>  
<tr><td><strong>GLshort</strong> (16 bits)</td><td>short</td><td>-32,768 to 32,767</td></tr>  
<tr><td><strong>GLushort</strong> (16 bits)</td><td>unsigned short</td><td>0 to 65,535</td></tr>  
<tr><td><strong>GLint</strong> (32 bits)</td><td>int</td><td>-2,147,483,648 to 2,147,483,647</td></tr>  
<tr><td><strong>GLuint</strong> (32 bits)</td><td>unsigned int</td><td>0 to 4,294,967,295</td></tr>  
<tr><td><strong>GLfixed</strong> (32 bits)</td><td>int</td><td>-2,147,483,648 to 2,147,483,647</td></tr>  
<tr><td><strong>GLsizei</strong> (32 bits)</td><td>int</td><td>-2,147,483,648 to 2,147,483,647</td></tr>  
<tr><td><strong>GLenum</strong> (32 bits)</td><td>unsigned int</td><td>0 to 4,294,967,295</td></tr>  
<tr><td><strong>GLdouble</strong> (64 bits)</td><td>double</td><td>Floating-point, approximately ±1.8 × 10<sup>308</sup></td></tr>  
<tr><td><strong>GLbitfield</strong> (32 bits)</td><td>unsigned int</td><td>0 to 4,294,967,295</td></tr>  
<tr><td><strong>GLfloat</strong> (32 bits)</td><td>float</td><td>Floating-point, approximately ±3.4 × 10<sup>38</sup></td></tr>  
<tr><td><strong>GLclampx</strong> (32 bits)</td><td>int</td><td>Fixed-point clamped to the range 0 to 1</td></tr>  
<tr><td><strong>GLclampf</strong> (32 bits)</td><td>float</td><td>Floating-point clamped to the range 0 to 1</td></tr>  
<tr><td><strong>GLclampd</strong> (64 bits)</td><td>double</td><td>Double clamped to the range 0 to 1</td></tr>  
<tr><td><strong>GLintptr</strong></td><td>int</td><td>pointer *</td></tr>  
<tr><td><strong>GLsizeiptr</strong></td><td>int</td><td>pointer *</td></tr>  
<tr><td><strong>GLvoid</strong></td><td>void</td><td>Can represent any data type</td></tr>  
</table>

<p>A very important fact about data types is that OpenGL ES does NOT support 64-bit data types, because embedded systems usually need performance and several devices don't have 64-bit processors. By using the OpenGL data types, you can easily and safely move your OpenGL application from C++ to JavaScript, for example, with fewer changes.</p>

<p>One last thing to introduce is the graphics pipeline. We'll use and talk about the Programmable Pipeline a lot; here is a visual illustration: <br />
<img src='http://db-in.com/images/programmable_pipepline_example.png'  alt="The OpenGL programmable pipeline." title="programmable_pipepline_example" width="600" height="450" class="size-full wp-image-998" /></p>

<p>We'll talk in depth about each step in that diagram. The only thing I want to point out now is the "Frame Buffer" in the image above. The Frame Buffer is marked as optional because you can choose not to use it directly, but internally OpenGL's core will always work with at least a Frame Buffer and a Color Render Buffer.</p>

<p>Did you notice the EGL API in the image above? <br />
This is a very important step for our OpenGL application. Before starting this tutorial, we need to know at least the basic concepts and setup of the EGL API. But EGL is a dense subject and I can't cover it here, so I've written a separate article to explain it. You can check it here: <a href='http://blog.db-in.com/khronos-egl-and-apple-eagl/'  target="_blank">EGL and EAGL APIs</a>. I really recommend you read it before continuing with this tutorial. </p>

<p>If you've read it, or already know about EGL, let's follow the order of the first part and start talking about Primitives.</p>

<p><br/><a name="primitives"></a>  </p>

<h2><strong>Primitives</strong></h2><a href="#list_contents">top</a>  
Do you remember from the first part, when I said that Primitives are Points, Lines and Triangles?  
All of them are constructed from one or more points in space, also called vertices.  
A vertex holds 3 pieces of information: the X, Y and Z positions. A 3D point is constructed from one vertex, a 3D line from two vertices and a triangle from three vertices. As OpenGL always wants to boost performance, all this information should be given as a single one-dimensional array, more specifically an array of float values. Like this:


<pre class="brush:csharp">  
GLfloat point3D[] = {1.0,0.0,0.5};  
GLfloat line3D[] = {0.5,0.5,0.5,1.0,1.0,1.0};  
GLfloat triangle3D[] = {0.0,0.0,0.0,0.5,1.0,0.0,1.0,0.0,0.0};  
</pre>


As you can see, the array of floats for OpenGL is a plain sequence with no distinction between the vertices; OpenGL will automatically interpret the first value as X, the second as Y and the third as Z, and will repeat this interpretation for every sequence of 3 values. All you need to do is tell OpenGL whether you want to construct a point, a line or a triangle. As an advanced note, you can customize this order if you want, and OpenGL can even work with a fourth value, but that's a subject for advanced topics. For now, assume the order will always be X,Y,Z.

The coordinates above will construct something like this:

<img src='http://db-in.com/images/primitives_example.gif'  alt="The three primitives in OpenGL" title="primitives_example" width="600" height="600" class="size-full wp-image-845" />

In this image, the dashed orange lines are just an indication to help you see more clearly where the vertices sit relative to the floor. Up to here it seems very simple! But now a question comes up: "OK, so how can I transform my 3D models from 3DS Max or Maya into an OpenGL array?"

When I was learning OpenGL, I thought there must be some 3D file formats we could import directly into OpenGL. "After all, OpenGL is the most popular graphics library and is used by almost all 3D software! I'm sure it has some method to import 3D files directly!"

Well, I was wrong! Bad news.

I've learned this and need to tell you: remember that OpenGL is focused on the most important and hardest part of 3D world construction, so it should not be responsible for fickle things like 3D file formats. There are so many 3D file formats (.obj, .3ds, .max, .ma, .fbx, .dae, .lxo...) that it would be too much for OpenGL and Khronos to worry about.

But the Collada format is from Khronos, right? So could I expect that, one day, OpenGL will be able to import Collada files directly? No! Don't do this. Accept this immutable truth: OpenGL does not deal with 3D files!

OK, so what do we need to do to import 3D models from 3D software into our OpenGL application? Well my friend, unfortunately I need to tell you: you will need a 3D engine or a third-party API. There's no easy way to do it.

If you choose a 3D engine, like PowerVR, SIO2, Oolong, UDK, Ogre and many others, you'll be stuck inside their APIs and their implementation of OpenGL. If you choose a third-party API just to load a 3D file, you will need to integrate the third-party class into your own implementation of OpenGL.

Another option is to look for a plugin for your 3D software that exports your objects as a .h file. The .h is just a header file containing your 3D objects in the OpenGL array format. Unfortunately, to this day I've only seen 2 plugins that do this: one for Blender made with Python and another made with Perl, and both were horrible. I've never seen plugins for Maya, 3DS Max, Cinema 4D, LightWave, XSI, ZBrush or Modo.

I want to give you another option, buddy. Something called NinevehGL!  
I won't talk much about it here, but it's my new 3D engine for OpenGL ES 2.x, made with pure Objective-C. I offer you the entire engine or just the parsing API for file formats such as .obj and .dae, whichever you prefer. You can check the NinevehGL website here:  
<a href='http://nineveh.gl/'  target="_blank">http://nineveh.gl</a>

What is the advantage of NinevehGL? It KEEPS IT SIMPLE! The other 3D engines are too big and unnecessarily expensive. NinevehGL is free!

OK, let's move deeply into primitives.

<br/><a name="meshes_lines_optimization"></a>  
<h3>Meshes and Lines Optimization</h3><a href="#list_contents">top</a>  
A 3D point has only one way to be drawn by OpenGL, but a line and a triangle each have three different ways: normal, strip and loop for lines; normal, strip and fan for triangles. Depending on the drawing mode, you can boost your render performance and save memory in your application. We'll discuss this at the right time, later in this tutorial.

For now, all we need to know is that even the most complex 3D mesh you could imagine will be made of a bunch of triangles. We call these triangles "faces". So let's create a 3D cube using an array of vertices.


<pre class="brush:csharp">  
// Array of vertices to a cube.
GLfloat cube3D[] =  
{
    0.50,-0.50,-0.50,    // vertex 1
    0.50,-0.50,0.50,    // vertex 2
    -0.50,-0.50,0.50,    // vertex 3
    -0.50,-0.50,-0.50,    // vertex 4
    0.50,0.50,-0.50,    // vertex 5
    -0.50,0.50,-0.50,    // vertex 6
    0.50,0.50,0.50,        // vertex 7
    -0.50,0.50,0.50        // vertex 8
};
</pre>


The precision of the float numbers really doesn't matter to OpenGL, but it can save a lot of memory and file size (a precision of 2 means 0.00, a precision of 5 means 0.00000). So I always prefer to use low precision; 2 is usually very good!

I don't want to confuse you too soon, but there's something you have to know. Meshes normally carry three important kinds of information: vertices, texture coordinates and normals. A good practice is to create one single array containing all this information. This is called an <strong>Array of Structures</strong>. A short example of it could be:


<pre class="brush:csharp">  
// Array of structures for a cube.
GLfloat cube3D[] =  
{
    0.50,-0.50,-0.50,    // vertex 1
    0.00,0.33,            // texture coordinate 1
    1.00,0.00,0.00,        // normal 1
    0.50,-0.50,0.50,    // vertex 2
    0.33,0.66,            // texture coordinate 2
    0.00,1.00,0.00,        // normal 2
    ...
};
</pre>


You can use this construction technique for any kind of information you want to use as per-vertex data. A question arises: "But this way, all my data must be of only one data type, GLfloat for example?" Yes. But I'll show you later in this tutorial that this is not a problem, because the destination of this data only accepts floating-point values anyway, so everything will be GLfloats. Don't worry about this now; you will understand at the right time.

OK, now we have a 3D mesh, so let's start configuring our 3D application and store this mesh in an OpenGL buffer.

<br/><a name="buffers"></a>  
<h2><strong>Buffers</strong></h2><a href="#list_contents">top</a>  
Do you remember from the first part when I said that OpenGL is a state machine working like a Port Crane? Now let's refine that illustration a little. OpenGL is like a Port Crane with several arms and hooks, so it can hold many containers at the same time.

<img src='http://db-in.com/images/opengl_crane_hooks_example.gif'  alt="OpenGL is like a Port Crane with few arms and hooks." title="opengl_crane_hooks_example" width="600" height="400" class="size-full wp-image-1190" />

Basically, there are four "arms": the texture arm (which is a double arm), the buffer object arm (which is a double arm), the render buffer arm and the frame buffer arm. Each arm can hold only one container at a time. This is very important, so I'll repeat it: <strong>Each arm can hold only ONE CONTAINER AT A TIME!</strong> The texture and buffer object arms are double arms because they can hold two different kinds of textures and buffer objects, respectively, but still only ONE CONTAINER OF EACH KIND AT A TIME! All we need to do is instruct the OpenGL crane to take a container from the port, and we do this by informing the name/id of the container.

Back to the code: the command that instructs OpenGL to "take a container" is <strong>glBind*</strong>. So every time you see a glBindSomething, you know it's an instruction for OpenGL to "take a container". There is only one exception to this rule, but we'll discuss it later on. Now, before binding something to OpenGL we need to create that thing. We use the <strong>glGen*</strong> functions to generate a "container" name/id.

<br/><a name="frame_buffers"></a>  
<h3>Frame Buffers</h3><a href="#list_contents">top</a>  
A frame buffer is temporary storage for our render output. Once our render is in a frame buffer, we can choose to present it on the device's screen, save it as an image file, or use the output as a snapshot.  
This is the pair of functions related to frame buffers:


<table width="675">  
<tr>  
<th>FrameBuffer Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glGenFramebuffers (GLsizei n, GLuint* framebuffers)</strong></h5><br/>  
<ul>  
    <li><strong>n</strong>: The number of frame buffer names/ids to generate at once.</li>
    <li><strong>framebuffers</strong>: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBindFramebuffer (GLenum target, GLuint framebuffer)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_FRAMEBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>framebuffer</strong>: The name/id of the frame buffer to be bound.</li>
</ul>  
</td>  
</tr>  
</table>


Under the hood, the creation of an OpenGL object is done automatically by the core the first time we bind that object. But this process doesn't generate a name/id for us, so it's advisable to always use <strong>glGen*</strong> to create buffer names/ids instead of creating your very own names/ids. Sounds confusing?  
OK, let's go to our first lines of code and you'll understand more clearly:


<pre class="brush:csharp">  
GLuint frameBuffer;

// Creates a name/id to our frameBuffer.
glGenFramebuffers(1, &frameBuffer);

// The real Frame Buffer Object will be created here,
// at the first time we bind an unused name/id.
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

// We can suppress the glGenFrambuffers.
// But in this case we'll need to manage the names/ids by ourselves.
// In this case, instead the above code, we could write something like:
//
// GLint frameBuffer = 1;
// glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
</pre>


The above code creates an instance of the GLuint data type called frameBuffer. Then we pass the memory location of the frameBuffer variable to glGenFramebuffers and instruct the function to generate just 1 name/id (yes, we can generate multiple names/ids at once). Finally, we bind the generated frameBuffer to OpenGL's core. 

<br/><a name="render_buffers"></a>  
<h3>Render Buffers</h3><a href="#list_contents">top</a>  
A render buffer is temporary storage for images produced by an OpenGL render. This is the pair of functions related to render buffers:


<table width="675">  
<tr>  
<th>RenderBuffer Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glGenRenderbuffers (GLsizei n, GLuint* renderbuffers)</strong></h5><br/>  
<ul>  
    <li><strong>n</strong>: The number of render buffer names/ids to generate at once.</li>
    <li><strong>renderbuffers</strong>: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBindRenderbuffer (GLenum target, GLuint renderbuffer)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>renderbuffer</strong>: The render buffer name/id to be bound.</li>
</ul>  
</td>  
</tr>  
</table>


OK, now, before we proceed: do you remember from the first part when I said that a render buffer is temporary storage and could be of 3 types? So we need to specify the kind of render buffer and some properties of that temporary image. We set a render buffer's properties by using this function:


<table width="675">  
<tr>  
<th>RenderBuffer Properties</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glRenderbufferStorage (GLenum target, GLenum internalformat, GLsizei width, GLsizei height)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>internalformat</strong>: This specifies what kind of render buffer we want and what color format this temporary image will use. This parameter can be:
<ul>  
    <li><strong>GL_RGBA4</strong>, <strong>GL_RGB5_A1</strong> or <strong>GL_RGB565</strong> for a render buffer with final colors;</li>
    <li><strong>GL_DEPTH_COMPONENT16</strong> to a render buffer with Z depth;</li>
    <li><strong>GL_STENCIL_INDEX</strong> or <strong>GL_STENCIL_INDEX8</strong> to a render buffer with stencil informations.</li>
</ul>  
</li>  
    <li><strong>width</strong>: The final width of a render buffer.</li>
    <li><strong>height</strong>: The final height of a render buffer.</li>
</ul>  
</td>  
</tr>  
</table>


You could ask, "But which render buffer will I set these properties for? How will OpenGL know which render buffer these properties belong to?" Well, this is where the great OpenGL state machine comes in! The properties will be set on the last render buffer bound! Very simple.

Look at how we can set the 3 kinds of render buffer:


<pre class="brush:csharp">  
GLuint colorRenderbuffer, depthRenderbuffer, stencilRenderbuffer;  
GLint sw = 320, sh = 480; // Screen width and height, respectively.

// Generates the name/id, creates and configures the Color Render Buffer.
glGenRenderbuffers(1, &colorRenderbuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, sw, sh);

// Generates the name/id, creates and configures the Depth Render Buffer.
glGenRenderbuffers(1, &depthRenderbuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, sw, sh);

// Generates the name/id, creates and configures the Stencil Render Buffer.
glGenRenderbuffers(1, &stencilRenderbuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, stencilRenderbuffer);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, sw, sh);  
</pre>


OK, but in our cube application we don't need stencil buffer, so let's optimize the above code:


<pre class="brush:csharp">  
GLuint renderbuffers[2];  
GLint sw = 320, sh = 480; // Screen width and height, respectively.

// Let's create multiple names/ids at once.
// To do this we declare our variable as an array of two elements.
glGenRenderbuffers(2, renderbuffers);

// The index 0 will be our color render buffer.
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffers[0]);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, sw, sh);

// The index 1 will be our depth render buffer.
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffers[1]);  
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, sw, sh);  
</pre>


At this point I need to make a digression.  
This step is a little different if you are using the Cocoa framework. Apple doesn't allow us to put the OpenGL render directly onto the device's screen; we need to place the output into a color render buffer and ask EAGL (Apple's implementation of EGL) to present the buffer on the device's screen. As the color render buffer is always mandatory in this case, to set its properties we need to call a different method, from the EAGLContext, called <strong>renderbufferStorage:fromDrawable:</strong>, and inform the CAEAGLLayer we want to render onto. Sounds confusing? Then it's time for a digression in your reading; go to this article: <a href='http://db-in.com/blog/2011/02/khronos-egl-and-apple-eagl/'  target="_blank">Apple's EAGL</a>.  
In that article I explain what EAGL is and how to use it.

Once you know about EAGL, you use the following code to set the color render buffer's properties, instead of glRenderbufferStorage:


<table width="675">  
<tr>  
<th>RenderBuffer Properties in case of Cocoa Framework</th>  
</tr>  
<tr>  
<td><h5><strong>- (BOOL) renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>fromDrawable</strong>: Your custom instance of CAEAGLLayer.</li>
</ul>  
</td>  
</tr>  
</table>



<pre class="brush:csharp">  
// Suppose you previously set EAGLContext *_context
// as I showed in my EAGL article.

GLuint colorBuffer;

glGenRenderbuffers(1, &colorBuffer);  
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);  
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:myCAEAGLLayer];
</pre>


When you call renderbufferStorage:fromDrawable: passing a CAEAGLLayer, the EAGLContext will take all relevant properties from the layer and properly set up the bound color render buffer.

Now it's time to place our render buffers inside our previously created frame buffer. Each frame buffer can contain ONLY ONE render buffer of each type, so we can't have a frame buffer with 2 color render buffers, for example. To attach a render buffer to a frame buffer, we use this function:


<table width="675">  
<tr>  
<th>Attach RenderBuffers to a FrameBuffer</th>  
</tr>  
<tr>  
<td><h5><strong> GLvoid  glFramebufferRenderbuffer (GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target always will be GL_FRAMEBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>attachment</strong>: This specifies which kind of render buffer we want to attach inside a frame buffer, this parameter can be:
<ul>  
    <li><strong>GL_COLOR_ATTACHMENT0</strong>: To attach a color render buffer;</li>
    <li><strong>GL_DEPTH_ATTACHMENT</strong>: To attach a depth render buffer;</li>
    <li><strong>GL_STENCIL_ATTACHMENT</strong>: To attach a stencil render buffer.</li>
</ul>  
</li>  
    <li><strong>renderbuffertarget</strong>: The renderbuffertarget always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL.</li>
    <li><strong>renderbuffer</strong>: The name/id of the render buffer we want to attach.</li>
</ul>  
</td>  
</tr>  
</table>


The same question comes up: "How will OpenGL know which frame buffer to attach these render buffers to?" Using the state machine! The last frame buffer bound will receive these attachments.

OK, before moving on, let's talk about the combination of Frame Buffer and Render Buffers. This is how they look:

<img src='http://db-in.com/images/framebuffer_example.jpg'  alt="Relationship between Frame Buffer and Render Buffers." title="framebuffer_example" width="600" height="600" class="size-full wp-image-1016" />

Internally, OpenGL always works with a frame buffer. This one is called the window-system-provided frame buffer, and the frame buffer name/id 0 is reserved for it. The frame buffers we control are known as application-created frame buffers.

The depth and stencil render buffers are optional. But the color buffer is always enabled, and as OpenGL's core always uses a color render buffer too, the render buffer name/id 0 is reserved for it. To optimize all the optional states, OpenGL gives us a way to turn some states on and off (understanding as a state every optional OpenGL feature). To do this, we use these functions:


<table width="675">  
<tr>  
<th>Turning the OpenGL States ON/OFF</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glEnable(GLenum capability)</strong></h5><br/>  
<ul>  
    <li><strong>capability</strong>: The feature to be turned on. The values can be:
<ul>  
    <li><strong>GL_TEXTURE_2D</strong></li>
    <li><strong>GL_CULL_FACE</strong></li>
    <li><strong>GL_BLEND</strong></li>
    <li><strong>GL_DITHER</strong></li>
    <li><strong>GL_STENCIL_TEST</strong></li>
    <li><strong>GL_DEPTH_TEST</strong></li>
    <li><strong>GL_SCISSOR_TEST</strong></li>
    <li><strong>GL_POLYGON_OFFSET_FILL</strong></li>
    <li><strong>GL_SAMPLE_ALPHA_TO_COVERAGE</strong></li>
    <li><strong>GL_SAMPLE_COVERAGE</strong></li>
</ul></li>  
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glDisable(GLenum capability)</strong></h5><br/>  
<ul>  
    <li><strong>capability</strong>: The feature to be turned off. The values can be the same as <strong>glEnable</strong>.</li>
</ul>  
</td>  
</tr>  
</table>


Once we turn a feature on or off, the instruction affects the entire OpenGL machine. Some people prefer to turn a feature on just for a while, use it, and then turn it off, but this is not advisable: it's expensive. The best way is to turn it on once and off once, or, if you really need to, minimize the on/off switches in your application.

So, back to the depth and stencil buffers: if your application needs one of them or both, try to enable what you need only once. As our cube example just needs a depth buffer, we could write:


<pre class="brush:csharp">  
// It doesn't matter if this is before or after
// we create the depth render buffer.
// The important thing is to enable it before trying
// to render something that needs it.
glEnable(GL_DEPTH_TEST);  
</pre>


Later I'll talk in depth about what the depth and stencil tests do and their relation to fragment shaders.

<br/><a name="buffer_object"></a>  
<h3>Buffer Objects</h3><a href="#list_contents">top</a>  
Buffer objects are optimized storage for our primitives' arrays. There are two kinds of buffer objects. In the first one we store the array of vertices; because of this, the buffer object is also known as a Vertex Buffer Object (VBO). After you've created the buffer object you can discard the original data, because the Buffer Object (BO) made a copy of it. We usually call it a VBO, but this kind of buffer object can hold any kind of array, like an array of normals, an array of texture coordinates, or even an array of structures. To adjust the name to the right idea, some people also call this kind of buffer object an Array Buffer Object (ABO).

The other kind of buffer object is the Index Buffer Object (IBO). Do you remember the array of indices from the first part? (<a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-1/#buffer_objects' target="_blank">click here to remember</a>). The IBO stores that kind of array. Usually the data type of the array of indices is GLubyte or GLushort. Some devices support up to GLuint, but this is an extension, almost a plugin, which vendors have to implement; the majority just support the default behavior (GLubyte or GLushort). So my advice is: always limit your array of indices to GLushort.

OK, now, to create these buffers the process is very similar to the frame buffer and render buffer: first you create one or more names/ids, then you bind one buffer object, and then you define its properties and data.


<table width="675">  
<tr>  
<th>Buffer Objects Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glGenBuffers(GLsizei n, GLuint* buffers)</strong></h5><br/>  
<ul>  
    <li><strong>n</strong>: The number of buffer object names/ids to generate at once.</li>
    <li><strong>buffers</strong>: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBindBuffer(GLenum target, GLuint buffer)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target defines what kind of buffer object this will be, a VBO or an IBO. The values can be:
<ul>  
    <li><strong>GL_ARRAY_BUFFER</strong>: This will set a VBO (or ABO, whatever).</li>
    <li><strong>GL_ELEMENT_ARRAY_BUFFER</strong>: This will set an IBO.</li>
</ul></li>  
    <li><strong>buffer</strong>: The name/id of the buffer object to be bound.</li>
</ul>  
</td>  
</tr>  
</table>


Now it's time to refine that illustration of the Port Crane's hooks. The BufferObject Hook is in reality a double hook, because it can hold two buffer objects, one of each type: <strong>GL_ARRAY_BUFFER</strong> and <strong>GL_ELEMENT_ARRAY_BUFFER</strong>.

OK, once you have bound a buffer object it's time to define its properties, or better said, its content. As the "BufferObject Hook" is a double one and you can have two buffer objects bound at the same time, you need to tell OpenGL which kind of buffer object you want to set the properties for.


<table width="675">  
<tr>  
<th>Buffer Objects Properties</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBufferData(GLenum target, GLsizeiptr size, const GLvoid* data, GLenum usage)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: Indicates what kind of buffer you want to set the properties for. This param can be <strong>GL_ARRAY_BUFFER</strong> or <strong>GL_ELEMENT_ARRAY_BUFFER</strong>.</li>
    <li><strong>size</strong>: The size of the buffer in the basic units (bytes).</li>
    <li><strong>data</strong>: A pointer to the data.</li>
    <li><strong>usage</strong>: The usage kind. This is like a hint that helps OpenGL optimize the data. It can be one of three kinds:
<ul>  
    <li><strong>GL_STATIC_DRAW</strong>: This denotes immutable data: you set it once and use the buffer often.</li>
    <li><strong>GL_DYNAMIC_DRAW</strong>: This denotes mutable data: you set it once, update its content several times, and use it often.</li>
    <li><strong>GL_STREAM_DRAW</strong>: This denotes temporary data. For those of you familiar with Objective-C, this is like an autorelease: you set it once and use it a few times; later OpenGL will automatically clean up and destroy this buffer.</li>
</ul></li>  
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: Indicates what kind of buffer you want to set the properties for. This param can be <strong>GL_ARRAY_BUFFER</strong> or <strong>GL_ELEMENT_ARRAY_BUFFER</strong>.</li>
    <li><strong>offset</strong>: The offset at which you will start making changes in the previously defined buffer object, given in basic units (bytes).</li>
    <li><strong>size</strong>: The size of the changes to the previously defined buffer object, given in basic units (bytes).</li>
    <li><strong>data</strong>: A pointer to the data.</li>
</ul>  
</td>  
</tr>  
</table>


Now let's understand what these functions do. The first one, glBufferData, sets the content of your buffer object and its properties. If you choose the usage type GL_DYNAMIC_DRAW, it means you want to update that buffer object later, and to do that you use the second one, glBufferSubData.
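To put those two creation steps and glBufferData in context, here is a minimal sketch, assuming a valid OpenGL ES 2 context; the triangle data is a hypothetical placeholder for your own mesh:

```c
#include <GLES2/gl2.h>

// Hypothetical data: a single triangle, 3 floats (XYZ) per vertex.
static const GLfloat triangleVertices[] =
{
	 0.0f,  1.0f, 0.0f,
	-1.0f, -1.0f, 0.0f,
	 1.0f, -1.0f, 0.0f,
};

static const GLushort triangleIndices[] = { 0, 1, 2 };

void createBufferObjects(GLuint *vbo, GLuint *ibo)
{
	// Generate one name/id for each buffer object.
	glGenBuffers(1, vbo);
	glGenBuffers(1, ibo);

	// Bind and upload the array of vertices (VBO).
	glBindBuffer(GL_ARRAY_BUFFER, *vbo);
	glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);

	// Bind and upload the array of indices (IBO), using GLushort as advised.
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *ibo);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(triangleIndices), triangleIndices, GL_STATIC_DRAW);

	// From here on OpenGL holds its own copy; the original arrays
	// could be freed if they had been allocated dynamically.
}
```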

When you use glBufferSubData, the size of your buffer object has already been defined, so you can't change it. But to optimize updates, you can choose to update just a small portion of the whole buffer object.
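As a hedged sketch of such a partial update, assuming a VBO holding XYZ positions (3 floats per vertex), replacing only vertices 10 to 19 comes down to a little byte arithmetic; `newData` is a hypothetical array with the replacement values:

```c
#include <GLES2/gl2.h>

void updateVertexRange(GLuint vbo, const GLfloat *newData)
{
	// 3 floats per vertex: update 10 vertices starting at vertex 10.
	GLintptr offset = 10 * 3 * sizeof(GLfloat); // byte offset into the buffer.
	GLsizeiptr size = 10 * 3 * sizeof(GLfloat); // number of bytes to replace.

	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferSubData(GL_ARRAY_BUFFER, offset, size, newData);
}
```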

Personally, I don't like to use GL_DYNAMIC_DRAW. If you stop to think about it, you'll see that there is hardly any effect or behavior in the 3D world that can only be achieved by changing the original vertex data, normal data or texture coordinate data. Using the shaders you can change almost everything related to that data, and GL_DYNAMIC_DRAW will certainly be much more expensive than a shader-based approach. So my advice here is: avoid GL_DYNAMIC_DRAW as much as possible! Always prefer to think of a way to achieve the same behavior using the shader features.

Once the Buffer Object has been properly created and configured, it's very simple to use: all we need to do is bind the desired buffer objects. Remember we can bind only one buffer object of each kind at a time. While the buffer objects stay bound, all the drawing commands we issue will use them. After the usage, it's a good idea to unbind them.
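A draw call with both buffer objects bound could be sketched like this (the attribute setup with glVertexAttribPointer is omitted here and comes with the shaders section; note how passing 0 to glBindBuffer unbinds the current buffer):

```c
#include <GLES2/gl2.h>

void drawWithBuffers(GLuint vbo, GLuint ibo, GLsizei indexCount)
{
	// Bind one buffer object of each kind.
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

	// While they stay bound, the drawing commands use them.
	// With an IBO bound, the last parameter is a byte offset
	// into the index buffer, not a pointer to client memory.
	glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);

	// After the usage, it's a good idea to unbind them.
	glBindBuffer(GL_ARRAY_BUFFER, 0);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
```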

Now let's move to the textures.

<br/><a name="textures"></a>  
<h2><strong>Textures</strong></h2><a href="#list_contents">top</a>  
Oh man, textures are a very large topic in OpenGL. To avoid making this tutorial even bigger than it already is, let's see just the basics about textures here. The advanced topics I'll leave to the third part of this tutorial or to a dedicated article.

The first thing I need to tell you about is the Power of Two (POT). OpenGL ONLY accepts POT textures. What does that mean? It means every texture must have a width and a height that are powers of two, like 2, 4, 8, 16, 32, 64, 128, 256, 512 or 1024 pixels. For a texture, 1024 is a big size and normally indicates the maximum possible size of a texture. So every texture used in OpenGL must have dimensions like 64 x 128, 256 x 32 or 512 x 512, for example. You can't use 200 x 200 or 256 x 100. This is a rule to optimize OpenGL's internal processing on the GPU.
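A quick way to validate a dimension is the classic bit trick: a positive integer is a power of two exactly when it has a single bit set. A small sketch:

```c
#include <stdbool.h>

// Returns true if n is a power of two (1, 2, 4, 8, ...),
// the only texture dimensions OpenGL accepts.
bool isPowerOfTwo(unsigned int n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

// A texture is valid when both dimensions pass the check:
// isPowerOfTwo(512) && isPowerOfTwo(64)  -> valid 512 x 64 texture.
// isPowerOfTwo(200)                      -> false, so 200 x 200 is invalid.
```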

Another important thing to know about textures in OpenGL is the pixel read order. Usually image file formats store the pixel information starting at the upper left corner and move line by line to the lower right corner. File formats like JPG, PNG, BMP, GIF, TIFF and others use this pixel order. But in OpenGL this order is flipped upside down: OpenGL textures read the pixels starting from the lower left corner and go to the upper right corner.

<img src='http://db-in.com/images/pixel_order_example.jpg'  alt="OpenGL reads the pixels from lower left corner to the upper right corner." title="pixel_order_example" width="600" height="600" class="size-full wp-image-1076" />

So, to solve this little issue, we usually do a vertical flip on our image data before uploading it to OpenGL's core. If your programming language lets you re-scale images, this is equivalent to re-scaling the height by -100%.

<img src='http://db-in.com/images/image_flipped_example.jpg'  alt="Image data must be flipped vertically to right fit into OpenGL." title="image_flipped_example" width="600" height="600" class="size-full wp-image-1070" />
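If your platform has no re-scaling facility, the flip is easy to do by hand: swap row 0 with the last row, row 1 with the second-to-last, and so on. A minimal sketch for a tightly packed RGBA buffer:

```c
#include <stdlib.h>
#include <string.h>

// Flips an RGBA image (4 bytes per pixel) vertically, in place.
void flipVertically(unsigned char *pixels, int width, int height)
{
	int rowBytes = width * 4;
	unsigned char *temp = malloc(rowBytes);
	int top, bottom;

	// Swap rows moving inward from both ends.
	for (top = 0, bottom = height - 1; top < bottom; ++top, --bottom)
	{
		memcpy(temp, pixels + top * rowBytes, rowBytes);
		memcpy(pixels + top * rowBytes, pixels + bottom * rowBytes, rowBytes);
		memcpy(pixels + bottom * rowBytes, temp, rowBytes);
	}

	free(temp);
}
```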

Now, shortly about the logic: textures in OpenGL work this way. You have an image file, so you must extract the binary color information from it, the hexadecimal values. You can extract the alpha information too; OpenGL supports the RGB and RGBA formats, and in the latter case you'll need to extract the hexadecimal + alpha value from your image. Store everything into an array of pixels.

With this array of pixels (also called texels, because they will be used in a texture) you can construct an OpenGL texture object. OpenGL will copy your array and store it in a format optimized for use in the GPU and, if needed, in the frame buffer.

Now comes the complex part; some people have criticized OpenGL a lot for this approach. Personally I think it could be better too, but it's what we have today. OpenGL has something called "Texture Units"; by default, any vendor's OpenGL implementation must support up to 32 Texture Units. These units represent a temporary link between the stored array of pixels and the actual render processing. You'll use the Texture Units inside the shaders, more specifically inside the fragment shaders. By default each shader can use up to 8 textures, and some vendor implementations support up to 16 textures per shader. Furthermore, OpenGL has a limit for the pair of shaders: although each shader alone could use up to 8 texture units, the pair (vertex and fragment) is limited to 8 texture units combined. Confused? Look: if you are using texture units in only one shader, you can use up to 8. But if you are using texture units in both shaders (different texture units), you still can't use more than 8 texture units combined.

Well, OpenGL can hold up to 32 Texture Units, which we use inside the shaders, but a shader only supports up to 8; this doesn't make sense, right? Well, the point is that you can set up to 32 Texture Units and use them across many shaders. But if you need a 33rd Texture Unit, you'll need to reuse a slot from the first 32.

Very confusing, I know...  
Let's see if a visual explanation can clarify the point:  
<img src='http://db-in.com/images/texture_units_example.gif'  alt="You can define up to 32 Texture Units, but just up to 8 textures per shader." title="texture_units_example" width="600" height="438" class="size-full wp-image-1045" />

As you saw in that image, one Texture Unit can be used many times by multiple shader pairs. This approach is really confusing, but let's understand it through Khronos' eyes: "Shaders are really great!", one Khronos developer said to another. "They are processed very fast by the GPU. Right! But the textures... hmmm... texture data still lives on the CPU, and it's big and heavy information! Hmmm... So we need a fast way to let the shaders access the textures, like a bridge, or something temporary. Hmmm... We could create a unit for the texture that could be processed directly on the GPU, just like the shaders. We could limit the number of texture units running on the GPU at once. A cache on the GPU: it's fast, it's better. Right, to make the setup, the user binds texture data to a texture unit and instructs his shaders to use that unit! Seems simple! Let's use this approach."

Normally the texture units are used in the fragment shader, but the vertex shader can also perform lookups into a texture. This is not common, but it can be useful in some situations.

Two very important things to remember: first, you must activate a texture unit using <strong>glActiveTexture()</strong> and then bind the texture name/id using <strong>glBindTexture()</strong>. Second, even though by default OpenGL supports up to 32 texture units, you can't use a slot number higher than the maximum supported by your vendor's implementation; so if your OpenGL implementation doesn't support more than 16 texture units, you can only use the texture units in the range 0 - 15.
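Putting those two calls together, binding a texture to unit 2 and pointing a fragment shader sampler at it could be sketched like this (the `u_diffuse` uniform name is a hypothetical example):

```c
#include <GLES2/gl2.h>

void useTextureOnUnit2(GLuint program, GLuint textureId)
{
	// Activate texture unit 2, then bind our texture to it.
	glActiveTexture(GL_TEXTURE2); // same as GL_TEXTURE0 + 2
	glBindTexture(GL_TEXTURE_2D, textureId);

	// Tell the sampler uniform in the shader to read from unit 2.
	// Note: the uniform receives the unit index (2), NOT the texture name/id.
	GLint location = glGetUniformLocation(program, "u_diffuse");
	glUniform1i(location, 2);
}
```

If you need the real limit of your implementation, you can query it with glGetIntegerv using GL_MAX_TEXTURE_IMAGE_UNITS.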

Well, the OpenGL texture unit approach could be better, of course, but as I said, it's what we have for now! OK, again the code is very similar to the others above: you generate a texture object, bind the texture and set its properties. Here are the functions:


<table width="675">  
<tr>  
<th>Texture Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glGenTextures(GLsizei n, GLuint* textures)</strong></h5><br/>  
<ul>  
    <li><strong>n</strong>: The number representing how many textures' names/ids will be generated at once.</li>
    <li><strong>textures</strong>: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBindTexture(GLenum target, GLuint texture)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: The target defines what kind of texture this will be, a 2D texture or a 3D texture. The values can be:
<ul>  
    <li><strong>GL_TEXTURE_2D</strong>: This will set a 2D texture.</li>
    <li><strong>GL_TEXTURE_CUBE_MAP</strong>: This will set a 3D texture.</li>
</ul></li>  
    <li><strong>texture</strong>: The name/id of the texture to be bound.</li>
</ul>  
</td>  
</tr>  
</table>


Does a "3D texture" sound weird? The first time I heard "3D texture" I thought: "WTF!". Well, because of this weirdness, OpenGL calls a 3D texture a Cube Map. Sounds better! Anyway, the point is that it represents a cube with one 2D texture on each face, so the 3D texture, or cube map, is a collection of six 2D textures. And how do we fetch the texels? With a 3D vector placed at the center of the cube. This subject needs much more attention, so I'll skip 3D textures here and leave that discussion to the third part of this tutorial. Let's focus on 2D textures, using only GL_TEXTURE_2D.

So, after we've created a 2D texture we need to set its properties. The Khronos Group calls OpenGL's core the "server", so when we define texture data they say this is an "upload". To upload the texture data and set some properties, we use:


<table width="675">  
<tr>  
<th>Texture Properties</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glTexImage2D (GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid* pixels)</strong></h5><br/>  
<ul>  
    <li><strong>target</strong>: For a 2D texture this will always be GL_TEXTURE_2D.</li>
    <li><strong>level</strong>: This parameter represents the mip map level. The base level is 0, for now let's use only 0.</li>
    <li><strong>internalformat</strong>: This represents the color format of the pixels. This parameter can be:
<ul>  
    <li><strong>GL_RGBA</strong>: For RGB + Alpha.</li>
    <li><strong>GL_RGB</strong>: For RGB only.</li>
    <li><strong>GL_LUMINANCE_ALPHA</strong>: For Red + Alpha only. In this case the red channel will represent the luminosity.</li>
    <li><strong>GL_LUMINANCE</strong>: For Red only. In this case the red channel will represent the luminosity.</li>
    <li><strong>GL_ALPHA</strong>: For Alpha only.</li>
</ul></li>  
    <li><strong>width</strong>: The width of the image in pixels.</li>
    <li><strong>height</strong>: The height of the image in pixels.</li>
    <li><strong>border</strong>: This parameter is ignored in OpenGL ES. Always use the value 0. This is just an internal constant to conserve compatibility with the desktop versions.</li>
    <li><strong>format</strong>: The format must have the same value as <strong>internalformat</strong>. Again, this is just an internal OpenGL convention.</li>
    <li><strong>type</strong>: This represents the data format of the pixels. This parameter can be:
<ul>  
    <li><strong>GL_UNSIGNED_BYTE</strong>: This format represents 1 byte per channel (4 bytes per pixel for RGBA): 8 bits for the red, green, blue and alpha channels, for example. This definition is used with all color formats.</li>
    <li><strong>GL_UNSIGNED_SHORT_4_4_4_4</strong>: This format represents 2 bytes per pixel: 4 bits for each of the red, green, blue and alpha channels. This definition is used with RGBA only.</li>
    <li><strong>GL_UNSIGNED_SHORT_5_5_5_1</strong>: This format represents 2 bytes per pixel: 5 bits for each of the red, green and blue channels and 1 bit for the alpha channel. This definition is used with RGBA only.</li>
    <li><strong>GL_UNSIGNED_SHORT_5_6_5</strong>: This format represents 2 bytes per pixel: 5 bits for red, 6 bits for green and 5 bits for blue. This definition is used with RGB only.</li>
</ul></li>  
    <li><strong>pixels</strong>: The pointer to your array of pixels.</li>
</ul>  
</td>  
</tr>  
</table>


Wow! A lot of parameters!  
OK, but it's not hard to understand. First of all, same behavior as the other "Hooks": a call to glTexImage2D will set the properties of the last texture bound.
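Putting the creation and upload together, here is a minimal sketch for a 256 x 256 RGBA texture; the `pixels` array is assumed to be your already flipped texel data, and the glTexParameteri call is an extra I've added because ES 2.0's default minification filter expects mip map levels we are not generating here:

```c
#include <GLES2/gl2.h>

GLuint createTexture(const GLvoid *pixels)
{
	GLuint texture;

	// Generate a name/id and bind it, just like the other OpenGL objects.
	glGenTextures(1, &texture);
	glBindTexture(GL_TEXTURE_2D, texture);

	// Upload the texel data to the "server": mip map level 0,
	// internalformat and format must match, border must be 0.
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
	             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

	// Without mip maps, the minification filter must not expect them.
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

	return texture;
}
```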

About the mip map: it is another OpenGL feature to optimize render time. In a few words, what it does is progressively create smaller copies of the original texture, down to an insignificant copy of 1 x 1 pixel. Later, during the rasterization process, OpenGL can choose the original or one of the copies to use, depending on the final size of the 3D object relative to the view. For now, don't worry about this feature; I'll probably write an article just about textures with OpenGL.

After the mip map level, we set the color format, the size of the image, the format of our data and finally our pixel data. The 2-bytes-per-pixel data formats are the best way to optimize your textures; use them whenever you can. Remember that the colors you use in OpenGL can't exceed the color range and format of your device and EGL context.
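To see why the 2-byte formats matter, it's worth doing the memory math. A sketch of the footprint calculation, ignoring mip map copies:

```c
// Bytes used by a texture, given its dimensions and bytes per pixel.
unsigned int textureBytes(unsigned int width, unsigned int height, unsigned int bytesPerPixel)
{
	return width * height * bytesPerPixel;
}

// A 512 x 512 texture:
//   GL_UNSIGNED_BYTE, RGBA (4 bytes per pixel)          -> 1048576 bytes (1 MB)
//   GL_UNSIGNED_SHORT_4_4_4_4, RGBA (2 bytes per pixel) ->  524288 bytes (512 KB)
```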

OK, now we know how to construct a basic texture and how it works inside OpenGL. Now let's move on to the Rasterize.

<br/><a name="rasterize"></a>  
<h2><strong>Rasterize</strong></h2><a href="#list_contents">top</a>  
The Rasterize, in the strict sense, is only the process by which OpenGL takes a 3D object and converts its bunch of math into a 2D image. Later, each fragment of the visible area will be processed by the fragment shader.

Looking at the Programmable Pipeline illustration at the beginning of this tutorial, you can see that the Rasterize is just a small step in the graphics pipeline. So why is it so important? I like to say that everything that comes after the Rasterize step is also part of the rasterization process, because everything done later on also serves to construct the final 2D image from a 3D object. OK, anyway.

The fact is that the Rasterize is the process of creating an image from a 3D object. It occurs for each 3D object in the scene and updates the frame buffer. You can interfere with the rasterization process in many ways.

<br/><a name="face_culling"></a>  
<h3>Face Culling</h3><a href="#list_contents">top</a>  
Now it's time to talk about Culling, Front Facing and Back Facing. OpenGL has methods to find and discard the non-visible faces. Imagine a simple plane in your 3D application; let's say you want this plane to be visible from just one side (because it represents a wall or a floor, whatever). By default, OpenGL will render both sides of that plane. To solve this issue you can use culling. Based on the order of the vertices, OpenGL can determine which is the front and which is the back face of your mesh (more precisely, it calculates the front and back face of each triangle), and using culling you can instruct OpenGL to ignore one of these sides (or even both). Look at this picture:

<img src='http://db-in.com/images/culling_example.jpg'  alt="If the culling was enabled, the default behavior will treat clock wise order as a back face." title="culling_example" width="600" height="378" class="size-full wp-image-1099" />

This feature called culling is completely flexible; you have at least three ways to do the same thing. That picture shows only one way, but the most important thing is to understand how it works. In the picture's case, a triangle is composed of vertex 1, vertex 2 and vertex 3. The triangle on the left is constructed using the order {1,2,3} and the one on the right is formed by the order {1,3,2}. By default, culling treats triangles formed counter-clockwise as front faces, which are not culled. Following the same behavior, on the right side of the image the triangle formed clockwise is treated as a back face and is culled (ignored in the rasterization process).

To use this feature you call the <strong>glEnable</strong> function with the parameter <strong>GL_CULL_FACE</strong>, which gives you the default behavior explained above. But if you want to customize it, you can use these functions:


<table width="675">  
<tr>  
<th>Cull Face properties</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glCullFace(GLenum mode)</strong></h5><br/>  
<ul>  
    <li><strong>mode</strong>: Indicates which face will be culled. This parameter can be:
<ul>  
    <li><strong>GL_BACK</strong>: This will ignore the back faces. This is the default behavior.</li>
    <li><strong>GL_FRONT</strong>: This will ignore the front faces.</li>
    <li><strong>GL_FRONT_AND_BACK</strong>: This will ignore both the front and back faces (don't ask me why someone would want to exclude both sides, even knowing that this produces no render at all; I'm still trying to figure out the reason for this silly setup).</li>
</ul></li>  
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glFrontFace(GLenum mode)</strong></h5><br/>  
<ul>  
    <li><strong>mode</strong>: Indicates how OpenGL will define the front face (and obviously also the back face). This parameter can be:
<ul>  
    <li><strong>GL_CCW</strong>: This will instruct OpenGL to treat triangles formed counter-clockwise as front faces. This is the default behavior.</li>
    <li><strong>GL_CW</strong>: This will instruct OpenGL to treat triangles formed clockwise as front faces.</li>
</ul></li>  
</ul>  
</td>  
</tr>  
</table>


As you can imagine, if you set <strong>glCullFace(GL_FRONT)</strong> and <strong>glFrontFace(GL_CW)</strong> you will achieve the same behavior as the default. Another way to change the default behavior is by changing the order in which your 3D objects are constructed, but of course this is much more laborious, because you need to change your array of indices.
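So a typical setup is just a few lines. As a sketch, both of the following configurations cull exactly the same faces:

```c
#include <GLES2/gl2.h>

void enableDefaultCulling(void)
{
	// Default behavior: counter-clockwise triangles are front faces,
	// and the back faces are culled.
	glEnable(GL_CULL_FACE);
	glCullFace(GL_BACK);
	glFrontFace(GL_CCW);
}

void enableEquivalentCulling(void)
{
	// Same result: clockwise triangles are now the front faces,
	// and we cull the front faces instead.
	glEnable(GL_CULL_FACE);
	glCullFace(GL_FRONT);
	glFrontFace(GL_CW);
}
```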

Culling is the first thing to happen in the Rasterize step, so it can determine whether a Fragment Shader (the next step) will be processed or not.

<br/><a name="per-fragment"></a>  
<h3>Per-Fragment Operations</h3><a href="#list_contents">top</a>  
Now let's refine a little our programmable pipeline diagram from the top of this tutorial, more specifically what happens after the fragment shader.

<img src='http://db-in.com/images/rasterize_pipepline_example.png'  alt="Refined Per-Fragment Operations into the the programmable pipeline." title="rasterize_pipepline_example" width="600" height="600" class="size-full wp-image-1098" />

Between the Fragment Shader and the Scissor Test there is one little omitted step, something called the "Pixel Ownership Test". This is an internal step: it decides the ownership of a pixel between OpenGL's internal frame buffer and the current EGL context. It is insignificant to us; you can't use it for anything. I only mention it so you know what happens internally. For us developers, this step is completely ignored.

As you saw, the only step you don't have access to is the Logicop. The Logicop is an internal process which includes things like clamping values to the 0.0 - 1.0 range, processing the final color into the frame buffer after all per-fragment operations, additional multisampling and other internal things. You don't need to worry about it. We need to focus on the purple boxes.

The purple boxes indicate processes which are disabled by default; you need to enable each of them using the <strong>glEnable</strong> function, if you want to use them, of course. You can look again at the <strong>glEnable</strong> parameters, but just to make this point clear, in short the purple boxes in this image represent the following parameters and meanings:  

<ul>  
    <li><strong>Scissor Test</strong>: GL_SCISSOR_TEST - This can crop the image: every fragment outside the scissor area will be ignored.</li>
    <li><strong>Stencil Test</strong>: GL_STENCIL_TEST - Works like a mask. The mask is defined by a black-and-white image where the white pixels represent the visible area, so every fragment placed on a black area will be ignored. This requires a stencil render buffer to work.</li>
    <li><strong>Depth Test</strong>: GL_DEPTH_TEST - This test compares the Z depth of the current 3D object against the other Z depths previously rendered. A fragment with a greater depth than another (that means, more distant from the viewer) will be ignored. This is done using a grayscale image. This requires a depth render buffer to work.</li>
    <li><strong>Blend</strong>: GL_BLEND - This step can blend the new fragment with the fragment already in the color buffer.</li>
    <li><strong>Dither</strong>: GL_DITHER - This is a little OpenGL trick. In systems where the colors available to the frame buffer are limited, this step can optimize the color usage so the image appears to have more colors than it really has. Dither has no configuration; you just choose whether to use it or not.</li>
</ul>
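As a hedged sketch, a common setup could enable the depth test plus classic alpha transparency; glBlendFunc (which configures how the new and existing fragments are combined) and the 100 x 100 scissor area are illustrative choices of mine, not requirements:

```c
#include <GLES2/gl2.h>

void setupPerFragmentOperations(void)
{
	// Depth test: needs a depth render buffer attached to the frame buffer.
	glEnable(GL_DEPTH_TEST);

	// Blend: combine the incoming fragment with the color buffer
	// using the standard "source alpha" transparency formula.
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

	// Scissor: ignore every fragment outside a 100 x 100 area
	// anchored at the lower left corner of the frame buffer.
	glEnable(GL_SCISSOR_TEST);
	glScissor(0, 0, 100, 100);
}
```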


For each of them, OpenGL provides a few functions to set up the process, like <strong>glScissor</strong>, <strong>glBlendColor</strong> or <strong>glStencilFunc</strong>. There are more than 10 such functions and I won't cover them here, maybe in another article. The important thing to understand here is the process. I told you about the default behavior, like the black and white in the stencil buffer, but by using those functions you can customize the processing, like changing the black-and-white behavior of the stencil buffer.

Look again at the programmable pipeline at the top. Each time you render a 3D object, that entire pipeline runs from the <strong>glDraw*</strong> call to the frame buffer, without entering the EGL API. Imagine a complex scene, a game scene, like a Counter-Strike scene. You could render tens, maybe hundreds of 3D objects to create one single static image. When you render the first 3D object, the frame buffer begins to be filled. If the subsequent 3D objects have fragments ignored by one or more of the Fragment Operations, the ignored fragments will not be placed in the frame buffer, but remember that this does not remove the fragments which are already there. The final Counter-Strike scene is a single 2D image resulting from many shaders, lights, effects and 3D objects. So every 3D object will have its vertex shader processed, and maybe also its fragment shader, but this doesn't mean that its resulting image will actually be visible.

Well, now you understand why I said the rasterization process includes more than one single step in the diagram. Rasterization is everything between the vertex shader and the frame buffer steps.

Now let's move to the most important section, the shaders!

<br/><a name="shaders"></a>  
<h2><strong>Shaders</strong></h2><a href="#list_contents">top</a>  
Here we are! The greatest invention of the 3D world!  
If you've read the first part of this series of tutorials and read this part all the way to here, I think you now have a good idea of what the shaders are and what they do. Just to refresh our memories, let's recap a little:  

<ul>  
    <li>Shaders use the GLSL or GLSL ES, a compact version of the first one.</li>
    <li>Shaders always work in pairs, a Vertex Shader (VSH) and a Fragment Shader (FSH).</li>
    <li>That pair of shaders will be processed every time you submit a render command, like <strong>glDrawArrays</strong> or <strong>glDrawElements</strong>.</li>
    <li>The VSH is processed per vertex: if your 3D object has 8 vertices, the vertex shader will be processed 8 times. The VSH is responsible for determining the final position of a vertex.</li>
    <li>The FSH is processed for each visible fragment of your objects. Remember that the FSH is processed before the "Fragment Operations" in the graphics pipeline, so OpenGL doesn't yet know which object is in front of the others; I mean, even the fragments behind others will be processed. The FSH is responsible for defining the final color of a fragment.</li>
    <li>VSH and FSH must be compiled separately and linked together within a Program Object. You can reuse a compiled shader in multiple Program Objects, but you can link only one shader of each kind (VSH and FSH) to each Program Object.</li>
</ul>


<br/><a name="shader_program"></a>  
<h3>Shader and Program Creation</h3><a href="#list_contents">top</a>  
OK, first let's talk about the process of creating a shader object, putting some source code into it and compiling it. As with any other OpenGL object, we first create a name/id for it and then set its properties. Compared to the other OpenGL objects, the additional step here is the compiling. Remember that the shaders will be processed by the GPU, and to optimize that processing OpenGL compiles your source code into a binary format. Optionally, if you have a previously compiled shader in a binary file, you can load it directly instead of loading the source and compiling it. But for now, let's focus on the compiling process.  
These are the functions related to the shader creation process:


<table width="675">  
<tr>  
<th>Shader Object Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLuint glCreateShader(GLenum type)</strong></h5><br/>  
<ul>  
    <li><strong>type</strong>: Indicates what kind of shader will be created. This parameter can be:
<ul>  
    <li><strong>GL_VERTEX_SHADER</strong>: To create a Vertex Shader.</li>
    <li><strong>GL_FRAGMENT_SHADER</strong>: To create a Fragment Shader.</li>
</ul></li>  
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glShaderSource(GLuint shader, GLsizei count, const GLchar** string, const GLint* length)</strong></h5><br/>  
<ul>  
    <li><strong>shader</strong>: The shader name/id generated by the <strong>glCreateShader</strong> function.</li>
    <li><strong>count</strong>: Indicates how many sources you are passing at once. If you are uploading only one shader source, this parameter must be 1.</li>
    <li><strong>string</strong>: The source of your shader(s). This parameter is a double pointer because you can pass an array of C strings, where each element represents one source. The pointed-to array should have the same length as the <strong>count</strong> parameter above.</li>
    <li><strong>length</strong>: A pointer to an array in which each element represents the number of chars in each C string of the above parameter. This array must have the same number of elements as specified in the <strong>count</strong> parameter above. This parameter can also be NULL, in which case each element in the <strong>string</strong> parameter above must be null-terminated.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glCompileShader(GLuint shader)</strong></h5><br/>  
<ul>  
    <li><strong>shader</strong>: The shader name/id generated by the <strong>glCreateShader</strong> function.</li>
</ul>  
</td>  
</tr>  
</table>


As you saw, this step is easy. You create a shader name/id, upload the source code to it and then compile it. If you upload source code into a shader which already had another source in it, the old source will be completely replaced. And changing the source with <strong>glShaderSource</strong> has no effect on an already compiled shader until you compile it again.

Each shader object has a GLboolean status indicating whether it is compiled or not. This status is set to TRUE if the shader compiled with no errors. This status is useful in debug builds of your application to check whether the shaders are compiling correctly. Together with this check, it's a good idea to query the info log that is provided. The functions are <strong>glGetShaderiv</strong> to retrieve the status and <strong>glGetShaderInfoLog</strong> to retrieve the status message. I'll not place the functions and parameters here, but I'll show this shortly in a code example.
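As a hedged sketch of such a debug check (the helper name createShader is my own), the whole creation process plus the status and info log queries could look like this:

```c
#include <GLES2/gl2.h>
#include <stdio.h>
#include <stdlib.h>

// Creates and compiles a shader, printing the info log if the compile fails.
// Returns the shader name/id, or 0 on failure.
GLuint createShader(GLenum type, const char *source)
{
	GLuint shader = glCreateShader(type);
	GLint compiled;

	// One null-terminated source string, so count = 1 and length = NULL.
	glShaderSource(shader, 1, &source, NULL);
	glCompileShader(shader);

	// Retrieve the compile status and, on error, the info log.
	glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
	if (!compiled)
	{
		GLint logLength;
		glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);

		char *log = malloc(logLength);
		glGetShaderInfoLog(shader, logLength, NULL, log);
		printf("Shader compile error: %s\n", log);
		free(log);

		glDeleteShader(shader);
		return 0;
	}

	return shader;
}
```

Usage would be something like: GLuint vsh = createShader(GL_VERTEX_SHADER, vertexSource);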

It's important to tell you that the OpenGL names/ids reserved for shaders form one single list. For example, if you generate a VSH which gets the name/id 1, this number will never be used again; if you now create an FSH, the new name/id will probably be 2, and so on. A VSH will never have the same name/id as an FSH, and vice versa.

Once you have a pair of shaders correctly compiled, it's time to create a Program Object to hold both. The process to create a program object is similar to the shader process. First you create a Program Object, then you upload something (in this case, you attach the compiled shaders to it) and finally you compile the program (in this case we don't use the word "compile", we use "link"). What will the Program be linked to? The Program links the shader pair together and links itself to OpenGL's core. This process is very important, because it's during linking that many verifications on your shaders occur. Just like the shaders, Programs also have a link status and a link info log which you can use to check for errors. Once a Program has been linked successfully, you can be sure: your shaders will work correctly. Here are the functions for the Program Object:


<table width="675">  
<tr>  
<th>Program Object Creation</th>  
</tr>  
<tr>  
<td><h5><strong>GLuint glCreateProgram(void)</strong></h5><br/>  
<ul>  
    <li>This function requires no parameters. This is because only one kind of Program Object exists, unlike the shaders. Also, instead of taking the memory location of a variable, this function returns the name/id directly; this different behavior is because you can't create more than one Program Object at once, so you don't need to inform a pointer.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glAttachShader(GLuint program, GLuint shader)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
    <li><strong>shader</strong>: The shader name/id generated by the <strong>glCreateShader</strong> function.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glLinkProgram(GLuint program)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
</ul>  
</td>  
</tr>  
</table>


In <strong>glAttachShader</strong> you don't have any parameter to identify whether the shader is a Vertex or a Fragment one. You remember that the shaders' names/ids form one single list, right? OpenGL will automatically identify the type of each shader based on its unique name/id. So the important part is that you call <strong>glAttachShader</strong> twice, once for the VSH and once for the FSH. If you attach two VSH or two FSH, the program will not be properly linked; likewise, if you attach more than two shaders, the program will fail to link.

You could create many programs, but how will OpenGL know which program to use when you call a <strong>glDraw*</strong>? OpenGL's Crane doesn't have an arm and a hook for program objects, right? So how will OpenGL know? Well, programs are our exception. OpenGL doesn't have a bind function for them, but it works with programs in the same way as a hook. When you want to use a program, you call this function:


<table width="675">  
<tr>  
<th>Program Object Usage</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glUseProgram(GLuint program)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
</ul>  
</td>  
</tr>  
</table>


After calling the function above, every subsequent call to the <strong>glDraw*</strong> functions will use the program currently in use. As with the <strong>glBind*</strong> functions, the name/id 0 is reserved by OpenGL, and calling <strong>glUseProgram(0)</strong> will unbind the current program.

Now it's time to code. Any OpenGL application you create will have code like this:

<div style="overflow:auto; height:600px"><pre class="brush:csharp">

GLuint _program;

GLuint createShader(GLenum type, const char **source)  
{
    GLuint name;

    // Creates a Shader Object and returns its name/id.
    name = glCreateShader(type);

    // Uploads the source to the Shader Object.
    glShaderSource(name, 1, source, NULL);

    // Compiles the Shader Object.
    glCompileShader(name);

    // If you are running in debug mode, query the info log.
    // DEBUG is a pre-processor macro defined to the compiler.
    // Some languages may not have an equivalent to it.
#if defined(DEBUG)

    GLint logLength;

    // Instead of GL_INFO_LOG_LENGTH we could use GL_COMPILE_STATUS.
    // I prefer to take the info log length, because it'll be 0 if the
    // shader was successfully compiled. If we used GL_COMPILE_STATUS
    // we would need to take the info log length in case of a failure anyway.
    glGetShaderiv(name, GL_INFO_LOG_LENGTH, &logLength);

    if (logLength > 0)
    {
        // Allocates the necessary memory to retrieve the message.
        GLchar *log = (GLchar *)malloc(logLength);

        // Get the info log message.
        glGetShaderInfoLog(name, logLength, &logLength, log);

        // Shows the message in console.
        printf("%s",log);

        // Frees the allocated memory.
        free(log);
    }
#endif

    return name;
}

GLuint createProgram(GLuint vertexShader, GLuint fragmentShader)  
{
    GLuint name;

    // Creates the program name/index.
    name = glCreateProgram();

    // Will attach the fragment and vertex shaders to the program object.
    glAttachShader(name, vertexShader);
    glAttachShader(name, fragmentShader);

    // Links the program into OpenGL's core.
    glLinkProgram(name);

#if defined(DEBUG)

    GLint logLength;

    // This function is different than the shaders one.
    glGetProgramiv(name, GL_INFO_LOG_LENGTH, &logLength);

    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);

        // This function is different than the shaders one.
        glGetProgramInfoLog(name, logLength, &logLength, log);

        printf("%s",log);

        free(log);
    }
#endif

    return name;
}

void initProgramAndShaders()  
{
    const char *vshSource = "... Vertex Shader source using SL ...";
    const char *fshSource = "... Fragment Shader source using SL ...";

    GLuint vsh, fsh;

    vsh = createShader(GL_VERTEX_SHADER, &vshSource);
    fsh = createShader(GL_FRAGMENT_SHADER, &fshSource);

    _program = createProgram(vsh, fsh);

    // Clears the shader objects.
    // In this case we can delete the shaders because we
    // will not use them anymore; once linked,
    // OpenGL stores a copy of them into the program object.
    glDeleteShader(vsh);
    glDeleteShader(fsh);

    // Later you can use the _program variable to use this program.
    // If you are using Object Oriented Programming, it's better to make
    // the program variable an instance variable; otherwise, make
    // it a static variable so you can reuse it in other functions.
    // glUseProgram(_program);
}
</pre></div>

<br/>  
Here I've made a minimal elaboration to make the code more reusable, separating the functions which create OpenGL objects. For example, instead of rewriting the shader creation code, we can simply call the function <strong>createShader</strong> and inform the kind of shader we want and its source. The same goes for programs. Of course, if you are using an OOP language you could elaborate it much more, creating separate classes for Program Objects and Shader Objects, for example.

This is the basics of shader and program creation, but we have much more to see. Let's move on to the Shader Language (SL). I'll treat specifically GLSL ES, the compact version of the OpenGL Shading Language for Embedded Systems.

<br/><a name="shader_language"></a>  
<h3>Shader Language</h3><a href="#list_contents">top</a>  
The shader language is very similar to standard C. The variable declarations and function syntax are the same, the if-then-else and loops have the same syntax too, and SL even accepts preprocessor macros, like #if, #ifdef, #define and others. The shader language was made to be as fast as possible, so be careful with the usage of loops and conditions: they are very expensive. Remember that the shaders will be processed by the GPU, where floating-point calculations are optimized. To explore this great improvement, SL has exclusive data types to work with the 3D world:


<table width="675">  
<tr>  
<th>SL's Data Type</th><th>Same as C</th><th>Description</th>  
</tr>  
<tr><td><strong>void</strong></td><td>void</td><td>Represents no data; used for functions that return nothing</td></tr>  
<tr><td><strong>float</strong></td><td>float</td><td>The range depends on the precision.</td></tr>  
<tr><td><strong>bool</strong></td><td>unsigned char</td><td>true or false</td></tr>  
<tr><td><strong>int</strong></td><td>char/short/int</td><td>The range depends on the precision.</td></tr>  
<tr><td><strong>vec2</strong></td><td>-</td><td>Array of 2 float. {x, y}, {r, g}, {s, t}</td></tr>  
<tr><td><strong>vec3</strong></td><td>-</td><td>Array of 3 float. {x, y, z}, {r, g, b}, {s, t, p}</td></tr>  
<tr><td><strong>vec4</strong></td><td>-</td><td>Array of 4 float. {x, y, z, w}, {r, g, b, a}, {s, t, p, q}</td></tr>  
<tr><td><strong>bvec2</strong></td><td>-</td><td>Array of 2 bool. {x, y}, {r, g}, {s, t}</td></tr>  
<tr><td><strong>bvec3</strong></td><td>-</td><td>Array of 3 bool. {x, y, z}, {r, g, b}, {s, t, p}</td></tr>  
<tr><td><strong>bvec4</strong></td><td>-</td><td>Array of 4 bool. {x, y, z, w}, {r, g, b, a}, {s, t, p, q}</td></tr>  
<tr><td><strong>ivec2</strong></td><td>-</td><td>Array of 2 int. {x, y}, {r, g}, {s, t}</td></tr>  
<tr><td><strong>ivec3</strong></td><td>-</td><td>Array of 3 int. {x, y, z}, {r, g, b}, {s, t, p}</td></tr>  
<tr><td><strong>ivec4</strong></td><td>-</td><td>Array of 4 int. {x, y, z, w}, {r, g, b, a}, {s, t, p, q}</td></tr>  
<tr><td><strong>mat2</strong></td><td>-</td><td>Array of 4 float. Represents a 2x2 matrix.</td></tr>  
<tr><td><strong>mat3</strong></td><td>-</td><td>Array of 9 float. Represents a 3x3 matrix.</td></tr>  
<tr><td><strong>mat4</strong></td><td>-</td><td>Array of 16 float. Represents a 4x4 matrix.</td></tr>  
<tr><td><strong>sampler2D</strong></td><td>-</td><td>Special type to access a 2D texture</td></tr>  
<tr><td><strong>samplerCube</strong></td><td>-</td><td>Special type to access a cube map texture</td></tr>  
</table>


All the vector data types (<strong>vec*</strong>, <strong>bvec*</strong> and <strong>ivec*</strong>) can have their elements accessed either by using the "." syntax or the array subscripting syntax "[x]". In the above table you saw the sequences <strong>{x, y, z, w}, {r, g, b, a}, {s, t, p, q}</strong>. They are the accessors for the vector elements. For example, .xyz represents the first three elements of a vec4, but you can't use .xyz on a vec2, because it would be out of bounds; for a vec2 only .xy can be used. You can also change the order to achieve your results: .yzx on a vec4 means you are querying the second, third and first elements, respectively. The reason for three different sequences is that a <strong>vec</strong> data type can be used to represent vectors (x,y,z,w), colors (r,g,b,a) or even texture coordinates (s,t,p,q). The important thing is that you can't mix these sets; for example, you can't use .xrt. The following example can help:


<pre class="brush:csharp">  
vec4 myVec4 = vec4(0.0, 1.0, 2.0, 3.0);  
vec3 myVec3;  
vec2 myVec2;

myVec3 = myVec4.xyz;        // myVec3 = {0.0, 1.0, 2.0};  
myVec3 = myVec4.zzx;        // myVec3 = {2.0, 2.0, 0.0};  
myVec2 = myVec4.bg;            // myVec2 = {2.0, 1.0};  
myVec4.xw = myVec2;            // myVec4 = {2.0, 1.0, 2.0, 1.0};  
myVec4[1] = 5.0;            // myVec4 = {2.0, 5.0, 2.0, 1.0};  
</pre>


Very simple.  
Now, about conversions, you need to take care with a few things. SL uses something called Precision Qualifiers to define the minimum and maximum values of a data type.

Precision Qualifiers are little instructions which you can place in front of any variable declaration. As with any data range, this depends on the hardware capacity. So the following table shows the minimum ranges required by SL; some vendors can increase these ranges:


<table width="675">  
<tr>  
<th>Precision</th><th>Floating Point Range</th><th>Integer Range</th>  
</tr>  
<tr><td><strong>lowp</strong></td><td>-2.0 to 2.0</td><td>-256 to 256</td></tr>  
<tr><td><strong>mediump</strong></td><td>-16,384.0 to 16,384.0</td><td>-1,024 to 1,024</td></tr>  
<tr><td><strong>highp</strong></td><td>-4,611,686,018,427,387,904.0 to 4,611,686,018,427,387,904.0</td><td>-65,536 to 65,536</td></tr>  
</table>


Instead of declaring a qualifier on each variable, you can also define global qualifiers by using the keyword <strong>precision</strong>. The Precision Qualifiers can help when you need to convert between data types. This should be avoided, but if you really need it, use the Precision Qualifiers to help you. For example, to convert a float to an int you should use a mediump float and a lowp int; if you try to convert a lowp float (range -2.0 to 2.0) to a lowp int, all you will get are integers between -2 and 2. And to convert, you must use a built-in constructor function for the desired data type. The following code can help:


<pre class="brush:csharp">  
precision mediump float;  
precision lowp int;

vec4 myVec4 = vec4(0.0, 1.0, 2.0, 3.0);  
ivec3 myIvec3;  
mediump ivec2 myIvec2;

// This would fail, because the data types are not compatible.
//myIvec3 = myVec4.zyx;

myIvec3 = ivec3(myVec4.zyx);    // This is OK.  
myIvec2.x = myIvec3.y;            // This is OK.

myIvec2.y = 1024;

// This is OK too, but myIvec3.x will assume its maximum value.
// Instead of 1024, it will be 256, because the precisions are not
// equivalent here.
myIvec3.x = myIvec2.y;  
</pre>


One of the great advantages and performance gains of working directly on the GPU is the floating-point operations. You can do multiplications and other floating-point operations very easily. Matrix types, vector types and the float type are fully compatible, respecting their dimensions, of course. You can make complex calculations, like matrix multiplications, in a single line, just like these:


<pre class="brush:csharp">  
mat4 myMat4;  
mat3 myMat3;  
vec4 myVec4 = vec4(0.0, 1.0, 2.0, 3.0);  
vec3 myVec3 = vec3(-1.0, -2.0, -3.0);  
float myFloat = 2.0;

// A mat4 has 16 elements and can be constructed from 4 vec4.
myMat4 = mat4(myVec4,myVec4,myVec4,myVec4);

// A float will multiply each vector value.
myVec4 = myFloat * myVec4;

// A mat4 multiplying a vec4 will result in a vec4.
myVec4 = myMat4 * myVec4;

// Using the accessor, we can multiply two vector of different orders.
myVec4.xyz = myVec3 * myVec4.xyz;

// A mat3 constructed from a mat4 takes its upper-left 3x3 submatrix.
myMat3 = mat3(myMat4);

// A mat3 multiplying a vec3 will result in a vec3.
myVec3 = myMat3 * myVec3;  
</pre>


You can also use arrays of any data type and can even construct structs, just like in C. SL defines that every shader must have one <strong>void main()</strong> function. The shader execution starts at this function, just like in C. Any shader which doesn't have this function will not compile. A function in SL works exactly as in C. Just remember that in SL a function must be written (or at least declared) before it's called, otherwise the call will fail. So if you have more functions in your shader, remember that <strong>void main()</strong> must be the last one written.
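As a quick illustration (the helper name and the grayscale weights below are my own, not from the article), here is a minimal FSH where a function is written before main(), as required:

```glsl
precision mediump float;

uniform sampler2D u_map;

varying vec2 v_texture;

// Written before main(), so main() can call it.
vec4 desaturate(vec4 color)
{
    // Standard luma weights; any weights would do for this example.
    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    return vec4(vec3(gray), color.a);
}

void main()
{
    gl_FragColor = desaturate(texture2D(u_map, v_texture));
}
```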

Now it's time to go deeper and understand exactly what the vertex and fragment shaders do.

<br/><a name="vertex_fragment"></a>  
<h3>Vertex and Fragment Structures</h3><a href="#list_contents">top</a>  
First of all, let's take a look at the Shaders Pipeline, and then I'll introduce you to the Attributes, Uniforms, Varyings and Built-In Variables.

<img src='http://db-in.com/images/shader_pipeline_example.jpg'  alt="The shaders pipeline." title="shader_pipeline_example" width="600" height="600" class="size-full wp-image-1170" />

Your VSH should always have one or more Attributes, because the Attributes are used to construct the vertices of your 3D object; only attributes can be defined per-vertex. To define the final vertex position you'll use the built-in variable gl_Position. If you are drawing a 3D point primitive you can also set gl_PointSize. Later on, you'll set the gl_FragColor built-in variable in the FSH.

The Attributes, Uniforms and Varyings form the bridge between the GPU processing and your application on the CPU. Before you render (call the <strong>glDraw*</strong> functions), you'll probably set some values for the Attributes in the VSH. These values can be constant across all vertices or different at each vertex. By default, any implementation of OpenGL's programmable pipeline must support at least 8 Attributes.

You can't set any variable directly on the FSH; what you need to do is set a Varying output in the VSH and prepare your FSH to receive that variable. This step is optional, as you saw in the image, but in reality it's very uncommon to construct a FSH which doesn't receive any Varying. By default, any implementation of OpenGL's programmable pipeline must support at least 8 Varyings.

Another way to communicate with the shaders is by using the Uniforms, but as the name suggests, the Uniforms are constant throughout all the shader processing (all vertices and all fragments). A very common usage of uniforms is the samplers. You remember the sampler data types, right? They are used to hold our Texture Units. You remember the Texture Units too, right? Just to make this point clear: sampler data types behave like int data types, but are a special kind reserved to work with textures. That's it. The minimum number of supported Uniforms differs between the shader types: the VSH supports at least 128 Uniforms, but the FSH supports at least 16 Uniforms.

Now, about the Built-In Variables: OpenGL defines a few variables which are mandatory for us in each shader. The VSH must define the final vertex position, which is done through the variable <strong>gl_Position</strong>; if the current drawing primitive is a 3D point, it's a good idea to set <strong>gl_PointSize</strong> too. The <strong>gl_PointSize</strong> will instruct the FSH about how many fragments each point will affect, or in simple words, the on-screen size of a 3D point. This is very useful for making particle effects, like fire.

In the FSH the built-in output variable is <strong>gl_FragColor</strong>. For compatibility with the desktop versions of OpenGL, <strong>gl_FragData</strong> can also be used. <strong>gl_FragData</strong> is an array related to the drawable buffers, but as OpenGL ES has only one internal drawable buffer, this variable must always be used as <strong>gl_FragData[0]</strong>. My advice here is to forget it and focus on <strong>gl_FragColor</strong>.

About the read-only built-in variables, the FSH has three of them: <strong>gl_FrontFacing</strong>, <strong>gl_FragCoord</strong> and <strong>gl_PointCoord</strong>. The <strong>gl_FrontFacing</strong> is a bool which indicates whether the current fragment belongs to a front-facing primitive or not. The <strong>gl_FragCoord</strong> is a vec4 which holds the fragment coordinate relative to the window (window here means the actual OpenGL view). The <strong>gl_PointCoord</strong> is used when you are rendering 3D points. When you specify <strong>gl_PointSize</strong>, you can use <strong>gl_PointCoord</strong> to retrieve the texture coordinate of the current fragment. A point is always square and its size is given in pixels, so a size of 16 represents a point formed by 16 x 16 pixels. The <strong>gl_PointCoord</strong> is in the range 0.0 - 1.0, exactly like texture coordinate information.

What matters for the built-in output variables is their final value. You can change the value of <strong>gl_Position</strong> several times in a VSH; the final position will be the last value set. The same is true for <strong>gl_FragColor</strong>.

The following table shows the built-in variables and their data types:


<table width="675">  
<tr>  
<th>Built-In Variable</th><th>Precision</th><th>Data Type</th>  
</tr>  
<tr>  
<th colspan=3>Vertex Shader Built-In Variables</th>  
</tr>  
<tr><td><strong>gl_Position</strong></td><td>highp</td><td>vec4</td></tr>  
<tr><td><strong>gl_PointSize</strong></td><td>mediump</td><td>float</td></tr>  
<tr>  
<th colspan=3>Fragment Shader Built-In Variables</th>  
</tr>  
<tr><td><strong>gl_FragColor</strong></td><td>mediump</td><td>vec4</td></tr>  
<tr><td><strong>gl_FrontFacing</strong></td><td>-</td><td>bool</td></tr>  
<tr><td><strong>gl_FragCoord</strong></td><td>mediump</td><td>vec4</td></tr>  
<tr><td><strong>gl_PointCoord</strong></td><td>mediump</td><td>vec2</td></tr>  
</table>


It's time to construct a real shader. The following code constructs a Vertex and a Fragment Shader which use two texture maps. Let's start with the VSH.


<pre class="brush:csharp">  
precision mediump float;  
precision lowp int;

uniform mat4        u_mvpMatrix;

attribute vec4        a_vertex;  
attribute vec2        a_texture;

varying vec2        v_texture;

void main()  
{
    // Pass the texture coordinate attribute to a varying.
    v_texture = a_texture;

    // Here we set the final position to this vertex.
    gl_Position = u_mvpMatrix * a_vertex;
}
</pre>


And now the corresponding FSH:


<pre class="brush:csharp">  
precision mediump float;  
precision lowp int;

uniform sampler2D    u_maps[2];

varying vec2        v_texture;

void main()  
{
    // Here we set the diffuse color to the fragment.
    gl_FragColor = texture2D(u_maps[0], v_texture);

    // Now we use the second texture to create an ambient color.
    // Ambient color doesn't affect the alpha channel and changes
    // less than half the natural color of the fragment.
    gl_FragColor.rgb += texture2D(u_maps[1], v_texture).rgb * .4;
}
</pre>


Great, now it's time to get back to the OpenGL API and prepare our Attributes and Uniforms. Remember that we don't have direct control over the Varyings, so we must set an Attribute to be sent to a Varying during the VSH execution.

<br/><a name="attributes_uniforms"></a>  
<h3> Setting the Attributes and Uniforms</h3><a href="#list_contents">top</a>  
To identify any variable inside the shaders, the Program Object defines locations for its variables (a location is the same as an index). Once you know the final location of an Attribute or Uniform, you can use that location to set its value.

To set up a Uniform, OpenGL gives us only one way: after the linking, we retrieve a location for the desired Uniform based on its name inside the shaders. To set up the Attributes, OpenGL gives us two ways: we can retrieve the location after the program has been linked, or define the location before the program is linked. I'll show you both ways anyway, but setting the locations before the linking process is useless, and you'll understand why. So let's start with this useless method.

Do you remember the exception to the rule of glBindSomething = "take a container"? OK, here it is. To set an attribute location before the program is linked, we use a function which starts with glBind, but in reality OpenGL's Port Crane doesn't take any container at this time. Here the "bind" word relates to a process inside the Program Object: making a connection between an attribute name and a location inside the program. So, the function is:


<table width="675">  
<tr>  
<th>Setting Attribute Location before the linkage</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glBindAttribLocation(GLuint program, GLuint index, const GLchar* name)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
    <li><strong>index</strong>: The location we want to set.</li>
    <li><strong>name</strong>: The name of attribute inside the vertex shader.</li>
</ul>  
</td>  
</tr>  
</table>


The above function must be called after you create the Program Object, but before you link it. This is the first reason why I discourage doing this: it's a middle step in the Program Object creation. Obviously you can choose the best way for your application; I prefer the next one.

Now let's see how to get the locations of Attributes and Uniforms after the linking process. Whichever way you choose, you must hold the location of each shader variable in your application, because you will need these locations to set their values later on. Here are the functions to use after the linking:


<table width="675">  
<tr>  
<th>Getting Attribute and Uniform Location</th>  
</tr>  
<tr>  
<td><h5><strong>GLint glGetAttribLocation(GLuint program, const GLchar* name)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
    <li><strong>name</strong>: The attribute's name inside the vertex shader.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLint glGetUniformLocation(GLuint program, const GLchar* name)</strong></h5><br/>  
<ul>  
    <li><strong>program</strong>: The program name/id generated by the <strong>glCreateProgram</strong> function.</li>
    <li><strong>name</strong>: The uniform's name inside the shaders.</li>
</ul>  
</td>  
</tr>  
</table>


Once we have the locations of our attributes and uniforms, we can use these locations to set the values we want. OpenGL gives us 28 different functions to set the values of our attributes and uniforms. Those functions are separated into groups which let you define constant values (uniforms or attributes) or dynamic values (attributes only). To use dynamic attributes you need to enable them first. You could ask what the difference is between the uniforms, which are always constant, and the constant attributes. Well, the answer is: good question! Just like the culling GL_FRONT_AND_BACK, this is one of those things I can't understand why OpenGL keeps. There is no real difference between uniforms and constant attributes in performance, memory footprint or similar impacts. So my big advice here is: leave the attributes for dynamic values only! If you have a constant value, use the uniforms!

Plus, two things make the uniforms the best choice for constant values: at least 128 Uniforms are guaranteed in the vertex shader but only 8 attributes, and attributes can't be arrays. I'll explain this fact later on. For now, although by default OpenGL can use the attributes as constants, they were not made for this purpose; they were made to be dynamic.

Anyway, I'll show how to set the dynamic attributes, the uniforms and even the useless constant attributes. Uniforms can use any of the data types, or even be a structure or an array of any of those. Here are the functions to set the uniform values:


<table width="675">  
<tr>  
<th>Defining the Uniforms Values</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glUniform{1234}{if}(GLint location, T value[N])</strong></h5><br/>  
<ul>  
    <li><strong>location</strong>: The uniform location retrieved by the <strong>glGetUniformLocation</strong> function.</li>
    <li><strong>value[N]</strong>: The value you want to set based on the last letter of the function name, <em>i</em> = GLint, <em>f</em> = GLfloat. You must repeat this parameter N times, according to the number specified in the function name {1234}.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glUniform{1234}{if}v(GLint location, GLsizei count, const T* value)</strong></h5><br/>  
<ul>  
    <li><strong>location</strong>: The uniform location retrieved by the <strong>glGetUniformLocation</strong> function.</li>
    <li><strong>count</strong>: The length of the array which you are setting. This will be 1 if you want to set only a single uniform. Values greater than 1 means you want to set values to an array.</li>
    <li><strong>value</strong>: A pointer to the data you want to set. If you are setting vector uniforms (vec3, for example), each set of 3 values will represent one vec3 in the shaders. The data type of the values must match with the letter in the function name, <em>i</em> = GLint, <em>f</em> = GLfloat.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glUniformMatrix{234}fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)</strong></h5><br/>  
<ul>  
    <li><strong>location</strong>: The uniform location retrieved by the <strong>glGetUniformLocation</strong> function.</li>
    <li><strong>count</strong>: The number of matrices which you are setting. This will be 1 if you want to set only one mat{234} into the shader. Values greater than 1 means you want to set values to an array of matrices, arrays are defined as mat{234}["<em>count</em>"] in the shaders.</li>
    <li><strong>transpose</strong>: This parameter must be GL_FALSE; it exists just for compatibility with the desktop version.</li>
    <li><strong>value</strong>: A pointer to your data.</li>
</ul>  
</td>  
</tr>  
</table>


Many questions, I know... Let me explain it step by step.  
The above table represents exactly 19 OpenGL functions. The notation {1234} means you write one of these numbers in the function name, {if} means you choose one of those letters, and the trailing "v" or "fv" is written literally. The [N] in the parameters means you repeat that parameter according to the number {1234} in the function name. Here is the complete list of 19 functions:


<ul>  
    <li><strong>glUniform1i(GLint location, GLint x)</strong></li>
    <li><strong>glUniform1f(GLint location, GLfloat x)</strong></li>
    <li><strong>glUniform2i(GLint location, GLint x, GLint y)</strong></li>
    <li><strong>glUniform2f(GLint location, GLfloat x, GLfloat y)</strong></li>
    <li><strong>glUniform3i(GLint location, GLint x, GLint y, GLint z)</strong></li> 
    <li><strong>glUniform3f(GLint location, GLfloat x, GLfloat y, GLfloat z)</strong></li>
    <li><strong>glUniform4i(GLint location, GLint x, GLint y, GLint z, GLint w)</strong></li>
    <li><strong>glUniform4f(GLint location, GLfloat x, GLfloat y, GLfloat z, GLfloat w)</strong></li>
    <li><strong>glUniform1iv(GLint location, GLsizei count, const GLint* v)</strong></li>
    <li><strong>glUniform1fv(GLint location, GLsizei count, const GLfloat* v)</strong></li>
    <li><strong>glUniform2iv(GLint location, GLsizei count, const GLint* v)</strong></li>
    <li><strong>glUniform2fv(GLint location, GLsizei count, const GLfloat* v)</strong></li>
    <li><strong>glUniform3iv(GLint location, GLsizei count, const GLint* v)</strong></li>
    <li><strong>glUniform3fv(GLint location, GLsizei count, const GLfloat* v)</strong></li>
    <li><strong>glUniform4iv(GLint location, GLsizei count, const GLint* v)</strong></li>
    <li><strong>glUniform4fv(GLint location, GLsizei count, const GLfloat* v)</strong></li>
    <li><strong>glUniformMatrix2fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)</strong></li>
    <li><strong>glUniformMatrix3fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)</strong></li>
    <li><strong>glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)</strong></li>
</ul>


Wow!!!  
From this perspective it can seem like a lot of functions to learn, but trust me, it's not!  
I prefer to look at that table. If I want to set a single uniform which is not of a matrix data type, I use <strong>glUniform{1234}{if}</strong> according to what I want: 1 = <strong>float/bool/int</strong>, 2 = <strong>vec2/bvec2/ivec2</strong>, 3 = <strong>vec3/bvec3/ivec3</strong> and 4 = <strong>vec4/bvec4/ivec4</strong>. Very simple! If I want to set an array, I just place a "v" (for vector) at the end of the previous reasoning, so I use <strong>glUniform{1234}{if}v</strong>. And finally, if what I want is to set a matrix data type, being an array or not, I surely use <strong>glUniformMatrix{234}fv</strong> according to what I want: 2 = <strong>mat2</strong>, 3 = <strong>mat3</strong> and 4 = <strong>mat4</strong>. To define an array, remember that the length of your array must be informed to one of the above functions via the <strong>count</strong> parameter. Seems simpler now, right?

That covers how to set a uniform in the shaders. Remember two important things. First, the same uniform can be used by both shaders; to do this, just declare it in both. Second, and most important, uniforms are set on the program object currently in use. So you MUST start using a program before setting its uniforms and attributes. You remember how to use a program object, right? Just call <strong>glUseProgram</strong> with the desired name/id.

Now let's see how to set the values of the attributes. Attributes can only use the data types <strong>float</strong>, <strong>vec2</strong>, <strong>vec3</strong>, <strong>vec4</strong>, <strong>mat2</strong>, <strong>mat3</strong> and <strong>mat4</strong>; they cannot be declared as arrays or structures. The following functions define the attribute values.


<table width="675">  
<tr>  
<th>Defining the Attributes Values</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glVertexAttrib{1234}f(GLuint index, GLfloat value[N])</strong></h5><br/>  
<ul>  
    <li><strong>index</strong>: The attribute's location retrieved by the <strong>glGetAttribLocation</strong> function or defined with <strong>glBindAttribLocation</strong>.</li>
    <li><strong>value[N]</strong>: The value you want to set. You must repeat this parameter N times, according to the number specified in the function name {1234}.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glVertexAttrib{1234}fv(GLuint index, const GLfloat* values)</strong></h5><br/>  
<ul>  
    <li><strong>index</strong>: The attribute's location retrieved by the <strong>glGetAttribLocation</strong> function or defined with <strong>glBindAttribLocation</strong>.</li>
    <li><strong>values</strong>: A pointer to an array containing the values you want to set. Only the necessary elements of the array will be used; for example, when setting a vec3, if you pass an array of 4 elements, only the first three will be used. When the shader needs to fill in missing components automatically, it uses the vec4 identity (x = 0, y = 0, z = 0, w = 1); for example, when setting a vec3, if you pass an array of 2 elements, the third component will be filled with 0. For matrices, the auto fill uses the identity matrix.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* ptr)</strong></h5><br/>  
<ul>  
    <li><strong>index</strong>: The attribute's location retrieved by the <strong>glGetAttribLocation</strong> function or defined with <strong>glBindAttribLocation</strong>.</li>
    <li><strong>size</strong>: This is the size of each element. Here the values can be:
<ul>  
    <li><strong>1</strong>: to set up float in shader.</li>
    <li><strong>2</strong>: to set up vec2 in shader.</li>
    <li><strong>3</strong>: to set up vec3 in shader.</li>
    <li><strong>4</strong>: to set up vec4 in shader.</li>
</ul></li>  
    <li><strong>type</strong>: Specifies the OpenGL data type used in the informed array. Valid values are:
<ul>  
    <li><strong>GL_BYTE</strong></li>
    <li><strong>GL_UNSIGNED_BYTE</strong></li>
    <li><strong>GL_SHORT</strong></li>
    <li><strong>GL_UNSIGNED_SHORT</strong></li>
    <li><strong>GL_FIXED</strong></li>
    <li><strong>GL_FLOAT</strong></li>
</ul></li>  
    <li><strong>normalized</strong>: If set to true (GL_TRUE), non-floating-point data types will be normalized: the conversion places the resulting float in the range 0.0 - 1.0. If set to false (GL_FALSE), non-floating-point data types are converted directly to floating point.</li>
    <li><strong>stride</strong>: The interval between consecutive elements in the informed array. If this is 0, the array elements are used sequentially (tightly packed). If this value is greater than 0, the elements are read respecting this stride. This value must be in basic machine units (bytes).</li>
    <li><strong>ptr</strong>: The pointer to an array containing your data.</li>
</ul>  
</td>  
</tr>  
</table>


The table above follows the same notation rules as the uniforms. It describes 9 functions, of which 8 set constant values and only one sets dynamic values: <strong>glVertexAttribPointer</strong>. Here is the complete list of functions:


<ul>  
    <li><strong>glVertexAttrib1f(GLuint index, GLfloat x)</strong></li>
    <li><strong>glVertexAttrib2f(GLuint index, GLfloat x, GLfloat y)</strong></li>
    <li><strong>glVertexAttrib3f(GLuint index, GLfloat x, GLfloat y, GLfloat z)</strong></li>
    <li><strong>glVertexAttrib4f(GLuint index, GLfloat x, GLfloat y, GLfloat z, GLfloat w)</strong></li>
    <li><strong>glVertexAttrib1fv(GLuint index, const GLfloat* values)</strong></li>
    <li><strong>glVertexAttrib2fv(GLuint index, const GLfloat* values)</strong></li>
    <li><strong>glVertexAttrib3fv(GLuint index, const GLfloat* values)</strong></li>
    <li><strong>glVertexAttrib4fv(GLuint index, const GLfloat* values)</strong></li>
    <li><strong>glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* ptr)</strong></li>
</ul>


The annoying thing here is that constant values are the default behavior for attributes; if you want to use dynamic values, you need to enable that feature explicitly. Dynamic values are set per-vertex. You must use the following functions to enable and disable the dynamic values behavior:


<table width="675">  
<tr>  
<th>Variable Values Feature</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glEnableVertexAttribArray(GLuint index)</strong></h5><br/>  
<ul>  
    <li><strong>index</strong>: The attribute's location retrieved by the <strong>glGetAttribLocation</strong> function or defined with <strong>glBindAttribLocation</strong>.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glDisableVertexAttribArray(GLuint index)</strong></h5><br/>  
<ul>  
    <li><strong>index</strong>: The attribute's location retrieved by the <strong>glGetAttribLocation</strong> function or defined with <strong>glBindAttribLocation</strong>.</li>
</ul>  
</td>  
</tr>  
</table>


So, before using <strong>glVertexAttribPointer</strong> to define per-vertex values for the attributes, you must enable the location of the desired attribute to accept dynamic values by calling <strong>glEnableVertexAttribArray</strong>.

For the pair of VSH and FSH shown earlier, we could use the following code to set up their values:


<pre class="brush:csharp">  
// Assume that _program was defined earlier in another code example.

GLuint mvpLoc, mapsLoc, vertexLoc, textureLoc;

// Gets the locations of the uniforms.
mvpLoc = glGetUniformLocation(_program, "u_mvpMatrix");  
mapsLoc = glGetUniformLocation(_program, "u_maps");

// Gets the locations of the attributes.
vertexLoc = glGetAttribLocation(_program, "a_vertex");  
textureLoc = glGetAttribLocation(_program, "a_texture");

// ...
// Later, in the render time...
// ...

// Sets the ModelViewProjection Matrix.
// Assume that the "matrix" variable is an array with
// 16 elements defined, matrix[16].
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, matrix);

// Assume that _texture1 and _texture2 are two texture names/ids.
// The order is very important: first you activate
// the texture unit and then you bind the texture name/id.
glActiveTexture(GL_TEXTURE0);  
glBindTexture(GL_TEXTURE_2D, _texture1);  
glActiveTexture(GL_TEXTURE1);  
glBindTexture(GL_TEXTURE_2D, _texture2);

// The {0,1} correspond to the activated texture units.
int textureUnits[2] = {0,1};

// Sets the texture units to a uniform.
// Note that the array itself is passed, not its address.
glUniform1iv(mapsLoc, 2, textureUnits);

// Enables the following attributes to use dynamic values.
glEnableVertexAttribArray(vertexLoc);  
glEnableVertexAttribArray(textureLoc);

// Assume that the "vertexArray" variable is an array of vertices
// composed of several sequences of 3 elements (X,Y,Z).
// Something like {0.0,0.0,0.0, 1.0,2.0,1.0, -1.0,-2.0,-1.0, ...}
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, 0, vertexArray);

// Assume that "textureArray" is an array of texture coordinates
// composed of several sequences of 2 elements (S,T).
// Something like {0.0,0.0, 0.3,0.2, 0.6,0.3, 0.3,0.7, ...}
glVertexAttribPointer(textureLoc, 2, GL_FLOAT, GL_FALSE, 0, textureArray);

// Assume that "indexArray" is an array of 0-based indices.
// Something like {0,1,2,  0,2,3,  2,3,4,  2,4,5,  ...}
glDrawElements(GL_TRIANGLES, 64, GL_UNSIGNED_SHORT, indexArray);

// Disables the vertices attributes.
glDisableVertexAttribArray(vertexLoc);  
glDisableVertexAttribArray(textureLoc);  
</pre>


I enabled and disabled the dynamic values for the attributes just to show you how. As I said before, enabling and disabling features in OpenGL are expensive tasks, so you may want to enable the dynamic values for the attributes only once, perhaps at the time you retrieve their locations, for example. I prefer to enable them once.

<br/><a name="using_buffer_objects"></a>  
<h3>Using the Buffer Objects</h3><a href="#list_contents">top</a>  
Using the buffer objects is very simple! All you need is to bind the buffer objects again. Do you remember that the buffer object hook is a double one? So you can bind a <strong>GL_ARRAY_BUFFER</strong> and a <strong>GL_ELEMENT_ARRAY_BUFFER</strong> at the same time. Then you call <strong>glDraw*</strong> informing the starting index inside the buffer object. You'll inform this start index instead of an array of data, so the number must be cast to a pointer to void. The start index must be in basic machine units (bytes).

For the previous code with attributes and uniforms, you could do something like this:


<pre class="brush:csharp">  
GLuint arrayBuffer, indicesBuffer;

// Generates the name/ids to the buffers
glGenBuffers(1, &arrayBuffer);  
glGenBuffers(1, &indicesBuffer);

// Assume we are using the best practice to store all informations about
// the object into a single array: vertices and texture coordinates.
// So we would have an array of {x,y,z,s,t,  x,y,z,s,t,  ...}
// This will be our "arrayBuffer" variable.
// For the "indicesBuffer" variable we use a simple
// array of 0-based indices {0,1,2,  0,2,3,  ...}

// ...
// Proceed with the retrieving attributes and uniforms locations.
// ...

// ...
// Later, in the render time...
// ...

// ...
// Uniforms definitions
// ...

glBindBuffer(GL_ARRAY_BUFFER, arrayBuffer);  
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer);

int fsize = sizeof(float);  
GLsizei str = 5 * fsize;  
void * void0 = (void *) 0;  
void * void3 = (void *)(3 * fsize);

glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, str, void0);  
glVertexAttribPointer(textureLoc, 2, GL_FLOAT, GL_FALSE, str, void3);

glDrawElements(GL_TRIANGLES, 64, GL_UNSIGNED_SHORT, void0);  
</pre>


If you are using an OOP language, you could create elegant structures around the concepts of buffer objects and attributes/uniforms.

OK, those are the basic concepts and instructions about the shaders and program objects. Now let's go to the last part (finally)! Let's see how to conclude the render using the EGL API.

<br/><a name="rendering"></a>  
<h2><strong>Rendering</strong></h2><a href="#list_contents">top</a>  
I'll show the basic kind of render: a render to the device's screen. As you noticed before in this series of tutorials, you could also render to an off-screen surface, like a frame buffer or a texture, and then save it to a file or create an image on the device's screen, whatever you want.

<br/><a name="pre-render"></a>  
<h3>Pre-Render</h3><a href="#list_contents">top</a>  
I like to think of the rendering as two steps. The first is the Pre-Render: in this step you need to clean any vestige of the last render. This is important because the frame buffers are conservative. You remember what a frame buffer is, right? A collection of images from render buffers. So when you complete a render, the images in the render buffers stay alive even after the final image has been presented to its surface. What the Pre-Render step does is simply clean up all the render buffers, unless you want, for some reason, to reuse the previous image in the render buffers.

Cleaning up the frame buffer is very simple. This is the function you will use:


<table width="675">  
<tr>  
<th>Clearing the Render Buffers</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glClear(GLbitfield mask)</strong></h5><br/>  
<ul>  
    <li><strong>mask</strong>: The mask represent the buffers you want to clean. This parameter can be:
<ul>  
    <li><strong>GL_COLOR_BUFFER_BIT</strong>: To clean the Color Render Buffer.</li>
    <li><strong>GL_DEPTH_BUFFER_BIT</strong>: To clean the Depth Render Buffer.</li>
    <li><strong>GL_STENCIL_BUFFER_BIT</strong>: To clean the Stencil Render Buffer.</li>
</ul></li>  
</ul>  
</td>  
</tr>  
</table>


As you know well by now, every instruction related to one of the Port Crane Hooks affects the last object bound. So before calling the function above, make sure you have bound the desired frame buffer. You can clean many buffers at once: since the mask parameter is a bit field, you can combine values with the bitwise OR operator "|". Something like this:


<pre class="brush:csharp">  
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);  
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  
</pre>


OpenGL also gives us other functions to perform the clean up, but the function above is good enough for almost any case.  
The Pre-Render step should happen before any <strong>glDraw*</strong> calls. Once the render buffers are clean, it's time to draw your 3D objects. The next step is the drawing phase, which is not one of the two render steps I mentioned before; it's just the drawing.

<br/><a name="drawing"></a>  
<h3>Drawing</h3><a href="#list_contents">top</a>  
I've shown it several times in this tutorial, but now it's time to drill deep into it. The drawing in OpenGL is triggered by two functions:


<table width="675">  
<tr>  
<th>The Drawing Functions</th>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glDrawArrays(GLenum mode, GLint first, GLsizei count)</strong></h5><br/>  
<ul>  
    <li><strong>mode</strong>: Specifies which primitive will be rendered and how its structure is organized. This parameter can be:
<ul>  
    <li><strong>GL_POINTS</strong>: Draws points. A point is composed of a single sequence of 3 values (x,y,z).</li>
    <li><strong>GL_LINES</strong>: Draws lines. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_LINE_STRIP</strong>: Draws lines forming a strip. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_LINE_LOOP</strong>: Draws lines closing a loop. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLES</strong>: Draws triangles. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLE_STRIP</strong>: Draws triangles forming a strip. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLE_FAN</strong>: Draws triangles forming a fan. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
</ul></li>  
    <li><strong>first</strong>: Specifies the starting index in the enabled vertex arrays.</li>
    <li><strong>count</strong>: The number of vertices to be drawn. This is very important: it represents the number of vertex elements, not the number of elements in the array of vertices; take care not to confuse the two. For example, if you are drawing a single triangle this should be 3, because a triangle is formed by 3 vertices. But if you are drawing a square (composed of two triangles) this should be 6, because it is formed by two sequences of 3 vertices, a total of 6 vertex elements, and so on.</li>
</ul>  
</td>  
</tr>  
<tr>  
<td><h5><strong>GLvoid glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid* indices)</strong></h5><br/>  
<ul>  
    <li><strong>mode</strong>: Specifies which primitive will be rendered and how its structure is organized. This parameter can be:
<ul>  
    <li><strong>GL_POINTS</strong>: Draws points. A point is composed of a single sequence of 3 values (x,y,z).</li>
    <li><strong>GL_LINES</strong>: Draws lines. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_LINE_STRIP</strong>: Draws lines forming a strip. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_LINE_LOOP</strong>: Draws lines closing a loop. A line is composed of two sequences of 3 values (x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLES</strong>: Draws triangles. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLE_STRIP</strong>: Draws triangles forming a strip. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
    <li><strong>GL_TRIANGLE_FAN</strong>: Draws triangles forming a fan. A triangle is composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).</li>
</ul></li>  
    <li><strong>count</strong>: The number of vertices to be drawn. This is very important: it represents the number of vertex elements, not the number of elements in the array of vertices; take care not to confuse the two. For example, if you are drawing a single triangle this should be 3, because a triangle is formed by 3 vertices. But if you are drawing a square (composed of two triangles) this should be 6, because it is formed by two sequences of 3 vertices, a total of 6 vertex elements, and so on.</li>
    <li><strong>type</strong>: Specifies the OpenGL data type used in the array of indices. This parameter can be:
<ul>  
    <li><strong>GL_UNSIGNED_BYTE</strong>: To indicate a GLubyte.</li>
    <li><strong>GL_UNSIGNED_SHORT</strong>: To indicate a GLushort.</li>
</ul></li>  
</ul>  
</td>  
</tr>  
</table>


Many questions, I know. First let me introduce how these functions work. One of the most important things in the programmable pipeline is defined here: the number of times the VSH will be executed! This is set by the <em>count</em> parameter. So if you specify 128, the program currently in use will run its VSH 128 times. Of course, the GPU will optimize this process as much as possible, but in general terms your VSH will be processed 128 times to process all your defined attributes and uniforms. And why did I tell you to mind the difference between the number of vertex elements and the number of elements in the array of vertices? Simple: you could have an array of vertices with 200 elements, but for some reason want to construct just one triangle this time, so <em>count</em> will be 3 instead of 200. This is even more useful when using an array of indices: you could have 8 elements in the array of vertices while the array of indices specifies 24 elements, in which case <em>count</em> will be 24. In general terms, it's the number of vertex elements you want to draw.

If you are using <strong>glDrawArrays</strong>, the <em>first</em> parameter works like an initial offset for your per-vertex attributes. If you set it to 2, for example, the values in the vertex shader will start at index 2 of the arrays you specified with <strong>glVertexAttribPointer</strong>, instead of starting at 0 as by default.

If you are using <strong>glDrawElements</strong>, the <em>indices</em> pointer plays that role, offsetting into the array of indices rather than directly into your per-vertex values. The <em>type</em> parameter identifies the data type of the indices; in reality it's an optimization hint. If your array of indices has fewer than 256 elements, it's a very good idea to use <strong>GL_UNSIGNED_BYTE</strong>. Some implementations of OpenGL also support a third data type, <strong>GL_UNSIGNED_INT</strong>, but this is not very common.

OK, now let's talk about the construction modes, defined by the <strong>mode</strong> parameter. It's a hint used in the construction of your meshes, but not every mode is useful for every kind of mesh. The following images can help you understand:

<img src='http://db-in.com/images/primitives_lines_strip_example.jpg'  alt="Line construction modes." title="primitives_lines_strip_example" width="600" height="400" class="size-full wp-image-1242" />

The image above shows what happens when we draw using each of the line drawing modes. All the drawings were made with the sequence {v0,v1,v2,v3,v4,v5} as the array of vertices, assuming each vertex has unique x,y,z coordinates. As I said before, the only mode compatible with any kind of drawing is GL_LINES; the other modes are optimizations for specific situations. Optimize? Yes, look: using GL_LINES the number of drawn lines was 3, using GL_LINE_STRIP it was 5, and with GL_LINE_LOOP it was 6, always with the same array of vertices and the same number of VSH executions.

Now the drawing modes to triangles are similar, look:

<img src='http://db-in.com/images/primitives_strip_example.jpg'  alt="Triangle construction mode." title="primitives_strip_example" width="600" height="400" class="size-full wp-image-1246" />

The image above shows what happens when we draw using each of the triangle drawing modes. All the indicated drawings were made with the sequence {v0,v1,v2,v3,v4,v5} as the array of vertices, assuming each vertex has unique x,y,z coordinates. Here again, the same thing: only the basic <strong>GL_TRIANGLES</strong> is useful for any kind of mesh; the other modes are optimizations for specific situations. Using <strong>GL_TRIANGLE_STRIP</strong>, each new vertex reuses the last two, so the index sequence {0,1,2,3,4,...} produces the triangles (0,1,2), (1,2,3), (2,3,4) and so on. Using <strong>GL_TRIANGLE_FAN</strong>, every triangle returns to the first vertex, so the same sequence {0,1,2,3,4,...} produces (0,1,2), (0,2,3), (0,3,4) and so on.

My advice is to use <strong>GL_TRIANGLES</strong> and <strong>GL_LINES</strong> as much as possible. The optimization gain of <strong>STRIP</strong>, <strong>LOOP</strong> and <strong>FAN</strong> can be achieved by optimizing your OpenGL drawing in other areas, with other techniques, like reducing the number of polygons in your meshes or optimizing your shader processing.

<br/><a name="render"></a>  
<h3>Render</h3><a href="#list_contents">top</a>  
This last step just presents the final result of the frame buffer to the screen, even if you are not explicitly using a frame buffer. I've explained this in my EGL article. If you missed it, <a href='http://blog.db-in.com/khronos-egl-and-apple-eagl/#rendering' target="_blank">click here for EGL</a> or, if you are using Objective-C, <a href='http://blog.db-in.com/khronos-egl-and-apple-eagl/#renderingeagl' target="_blank">click here for EAGL</a>.

So I will not repeat the same content here. But I want to remind those of you working with the EAGL API: before calling <strong>presentRenderbuffer:GL_RENDERBUFFER</strong>, you must bind the color render buffer and, obviously, the frame buffer too. This is because the render buffer lives "inside" the frame buffer, you remember, right? The final code will be something like this:


<pre class="brush:csharp">  
- (void) makeRender
{
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
</pre>



For those of you using the EGL API, the process is just to swap the internal buffers with the <strong>eglSwapBuffers</strong> function. My EGL article explains that too.

OK guys! These are the basics of the render. OpenGL also provides something called Multisample, a special kind of render that produces anti-aliased images, but that is an advanced discussion. I'll leave it for the next part.

This tutorial is already long enough, so let's go to the conclusion and our final revision.

<br><a name="conclusion"></a>  
<h2><strong>Conclusion</strong></h2><a href="#list_contents">top</a>  
Here we are! I don't know how many hours you spent reading this, but I have to admit, it's a long and tiring read. So I want to thank you, thank you for reading. Now, as usual, let's recap everything:


<ul>  
    <li>First we saw OpenGL's data types and the programmable pipeline.</li>
    <li>Your mesh (primitive) data must be an array of information, preferably optimized as an Array of Structures.</li>
    <li>OpenGL works like a Port Crane with multiple arms and hooks. Four great hooks: the Frame Buffer Hook, Render Buffer Hook, Buffer Object Hook and Texture Hook.</li>
    <li>The Frame Buffer holds 3 Render Buffers: Color, Depth and Stencil. Together they form the image coming from OpenGL's render.</li>
    <li>Textures must have a specific format before being uploaded to OpenGL (a specific pixel order and number of color bytes per pixel). Once an OpenGL texture is created, you need to activate a Texture Unit so it can be processed in the shaders.</li>
    <li>Rasterization is a big process which includes several tests and per-fragment operations.</li>
    <li>Shaders are used in pairs and must live inside a Program Object. Shaders use their own language, called GLSL or GLSL ES.</li>
    <li>You can define dynamic (per-vertex) values only in Attributes, inside the VSH. Uniforms are always constant and can be used in both kinds of shader.</li>
    <li>You need to clean the render buffers before starting to draw new things into them.</li>
    <li>You will call a <strong>glDraw*</strong> function for each 3D object you want to render.</li>
    <li>The final render step is made using the EGL API (or the EAGL API, in the iOS case).</li>
</ul>


My last advice is for you to review the most important points again. If you have any doubt, just ask: leave a comment below and if I can help, I'll be glad.

<br/>  
<h2><strong>On the Next</strong></h2>  

<p>All I showed here is an intermediate level of OpenGL. Using this knowledge you can create great 3D applications. I think this is around half of the OpenGL study. In the next part, an advanced tutorial, I'll talk about everything I skipped here: 3D textures, multisampling, rendering to off-screen surfaces, per-fragment operations in depth and some best practices I've learned developing my 3D engine.</p>

<p>I'll probably write another two articles before the third part of this series. One about textures, the complexity of binary images and compressed formats like PVR textures. And another one about the ModelViewProjection Matrix, Quaternions and matrix operations; it's more of a mathematical article, and those who like math will appreciate it.</p>

<p>Thanks again for reading and see you in the next part!</p>

<p><strong>NEXT PART:</strong> <a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-3'  target="_blank">Part 3 - Jedi skills in OpenGL ES 2.0 and 2D graphics (Advanced)</a></p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/all-about-opengl-es-2-x-part-2/</link><guid isPermaLink="false">78198f3d-8811-416a-a0f7-c5aeb5dcb8df</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:45:47 GMT</pubDate></item><item><title><![CDATA[All about OpenGL ES 2.x - (part 1&#x2F;3)]]></title><description><![CDATA[<p><img src='http://db-in.com/images/opengl_part1.png'  alt="" title="opengl_part1" width="300" height="283" class="alignright size-medium wp-image-812" /> <br />
Hello everyone!</p>

<p>Welcome again to a new tutorial series. This time let's talk about the magic of the 3D world. Let's talk about OpenGL. I dedicated the last five months of my life entirely to going deep into the 3D world. I'm finishing my new 3D engine (it seems like one of the greatest works I've done) and now it's time to share with you what I know, all the references, all the books, tutorials, everything, and, of course, to learn more from your feedback.</p>

<p>This series is composed of 3 parts:  </p>

<ul>  
    <li>Part 1 - Basic concepts of 3D world and OpenGL (Beginners)</li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2'  target="_blank">Part 2 - OpenGL ES 2.0 in-depth (Intermediate)</a></li>
    <li><a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-3'  target="_blank">Part 3 - Jedi skills in OpenGL ES 2.0 and 2D graphics (Advanced)</a></li>
</ul>  

<!--more-->  

<p>So if you are interested in code, just jump to parts 2/3, because in this first one I'll only talk about concepts, nothing more.</p>

<p>OK, let's start.</p>

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>Who has never heard of OpenGL? OpenGL means Open Graphics Library, and it is used a lot today across computer languages. OpenGL is the closest point between the CPU (on which we, developers, run our applications written in some language) and the GPU (the graphics processor that exists in every graphics card). So OpenGL needs to be supported by the graphics card vendors (like NVIDIA) and implemented by the OS vendors (like Apple in its MacOS and iOS), and finally OpenGL gives us, developers, a unified API to work with. This API is "language free" (or almost free). This is amazing, because whether you use C, C++, Objective-C, Perl, C#, JavaScript, whatever, the API will always be equivalent: the same behavior, the same functions, the same commands. And that is where this tutorial series comes in: bringing OpenGL's API to developers.</p>

<p>Before starting to talk about the OpenGL API, we need good knowledge of the 3D world. The history of the 3D <br />
world in computing is bound to OpenGL's history. So let's take a little look at a bit of history.</p>

<p><br/>  </p>

<h2><strong>A short story</strong></h2>  

<p><img class="alignleft size-full wp-image-689" title="opengl_logo" src='http://db-in.com/images/opengl-logo.jpg'  alt="OpenGL logo" width="204" height="75" /><img class="alignleft size-full wp-image-690" title="opengl-es-logo" src='http://db-in.com/images/opengl-es-logo.jpg'  alt="OpenGL ES logo" width="204" height="75" /> <br />
<br/><br/><br/><br/> <br />
About 20 years ago, a company called Silicon Graphics (SGI) made a little library that was able to show an illusion of reality. In a world of 2D images, that library dared to show 3D images, simulating the perspective and depth of the human eye. It was called IRIS GL (probably because it tried to simulate the eye's iris).</p>

<p>Well, seriously, it was the first great graphics library. But it died fast, because to do what it did, SGI needed to control too many things in the computer: the graphics card, the windowing system, the base language and even the front end. That's too much for a single company to manage. So SGI started to delegate things like "create graphics cards", "manage the windowing" and "make the front end" to other companies, and focused on the most important part of its graphics library. In 1992, the first OpenGL was launched.</p>

<p>In 1995 Microsoft released DirectX, OpenGL's main competitor, with Direct3D following in 1996. <br />
Only in 1997 was OpenGL 1.1 released. But OpenGL became really interesting to me only in 2004, when OpenGL 2.0 was released with a great change: the shaders, the programmable pipeline. I love it! <br />
And finally, in 2007, we met OpenGL ES 2.0, which brought the power of shaders and the programmable pipeline to embedded systems.</p>

<p>Today you can see the OpenGL (or OpenGL ES) logo in many games, 3D applications, 2D applications and a lot of graphics software (especially 3D software). OpenGL ES is used by PlayStation, Android, Nintendo 3DS, Nokia, Samsung, Symbian and, of course, by Apple with MacOS and iOS.</p>

<p><br/>  </p>

<h2><strong>OpenGL's Greatest Rival</strong></h2>  

<p>Talking about Microsoft's Windows OS (Brugh!!) <br />
OK, do you remember I said that the first launch of OpenGL was in 1992? At that time, Microsoft had its shiny Windows 3.1. Well, as Microsoft always believed that "nothing is created, everything is copied", it tried to copy OpenGL in what it called DirectX, introduced in 1995 with Windows 95.</p>

<p>One year later, in 1996, Microsoft introduced Direct3D, which is a literal copy of OpenGL. The point is that Microsoft dominated the PC market for years, and DirectX (or Direct3D) penetrated and took hold like a plague on many computers (PCs), and when Microsoft started to deliver its OS to mobiles and video games, DirectX went together.</p>

<p>Today DirectX is very similar in structure to OpenGL: it has a Shader Language, it has a programmable pipeline, it has a fixed pipeline too, and even the names of the functions in the API are similar. The difference is that OpenGL is OPEN, while DirectX is closed. OpenGL is for iOS, Mac OS and Linux systems, while DirectX is just for Microsoft OSes.</p>

<p>Great, now let's start our journey into 3D world!</p>

<p><br/>  </p>

<h2><strong>3D world</strong></h2>  

<h3>First Point - The Eye</h3>  

<p>Since I can remember, I have been passionate about the 3D world and 3D games. All that we humans know about simulating the real world in a 3D illusion comes from just one single place: our eye.</p>

<p>The eye is the base of everything in the 3D world. All that we do is simulate the power, the beauty and the magic of the human eye. I'm not a doctor (despite being the son of two) and I don't want to talk about the eye in this tutorial, but it will be good if you are familiar with concepts like: field of view, binocular and monocular vision, the eye's lens, concave and convex lenses, and that kind of thing. This can help you understand some concepts later.</p>

<p>Everything we do in the 3D world is meant to recreate the sensations of the human eye: the perspectives, the vanishing points, the distortions, the depth of field, the focus, the field of view; in summary, everything is there to simulate those sensations. <br />
<br/>  </p>

<h3>Second Point - The Third Dimension</h3>  

<p>This can seem stupid, but it's necessary to say: the 3D world is 3D because it has 3 dimensions. "WTF! This is so obvious!", calm down, dude, I'm saying this because it's important to note that the addition of one single dimension (compared to the 2D world) leads us into serious trouble. It doesn't create 1 or 3 little problems; it drives us into a pool of troubles.</p>

<p>Look, in the 2D world, when we need to rotate a square it's very simple: 45° will always lead our square to one specific rotation. But in the 3D world, rotating a simple square involves X, Y and Z rotations, and depending on which axis we rotate first, the final result can be completely different. Things become worse when we make consecutive rotations. For example, rotating x = 25 and y = 20 is one thing, but rotating x = 10, then y = 20 and then x = 10 again produces a completely new result.</p>

<p>Well, the point here is that the addition of one more dimension stupidly multiplies our work. <br />
<br/>  </p>

<h3>Third Point - It's not 3D... it's often 4D.</h3>  

<p>WTF! Another dimension? <br />
Yes, dude, this is the last point I need to make. Often we don't work just in a 3D world; we have a fourth dimension: time. The things in the 3D world need to interact, move, accelerate, collide with each other and change their inertia. And as I said before, making consecutive changes in the 3D world can drive us to multiple different results.</p>

<p>OK, so far we have a phrase to define the 3D world: "It is the simulation of the human eye, and everything moves."</p>

<p><br/>  </p>

<h2><strong>OpenGL into the 3D world</strong></h2>  

<p>Now this tutorial starts to be a little more fun. Let's talk about the great engine OpenGL. First we need to thank great mathematicians like Leonhard Euler, William Rowan Hamilton, Pythagoras and so many others. Thanks to them, today we have many formulas and techniques to work with 3D space. OpenGL uses all this knowledge to construct a 3D world right in front of our faces. There are thousands, maybe millions, of operations per second using a lot of formulas to simulate the beauty of the human eye.</p>

<p>OpenGL is a great STATE MACHINE (meaning the entire OpenGL works like the State Design Pattern). To illustrate what OpenGL is, let's imagine a great Port Crane at some port. There are many containers with a lot of crates inside. OpenGL is like the whole port, in which:  </p>

<ul>  
<li>The containers are OpenGL's objects (Textures, Shaders, Meshes and that kind of stuff).</li>  
<li>The crates inside each container are what we create in our applications using OpenGL: our instances.</li>  
<li>The port crane is the OpenGL API, to which we have access.</li>  
 </ul>

So when we execute an OpenGL function, it's like giving an order to the Crane. The Crane takes the container in the port, raises it, holds it for a while, processes what you want inside that container and finally brings the container down again and drops it somewhere in the port.

You don't have direct access to the port: you can't see or change the containers' contents, you can't reorganize them, you can't do anything directly to the containers in the port. All you can do is give instructions to the Crane. The Crane is the only one that can manage the containers in the port. Remember this! It is the most important piece of information about OpenGL so far: the Crane is the only one that can manage the containers in the port.

<img src='http://db-in.com/images/opengl_port_crane_example.jpg'  alt="OpenGL Port Crane example" title="opengl_port_crane_example" width="600" height="450" class="size-full wp-image-727" />

Well, OpenGL seems like a very limited API this way, but it is not. The OpenGL Crane is a very, very powerful one. It can repeat the process of holding and dropping containers thousands or millions of times in a single second. Another great advantage of OpenGL using a State Machine pattern is that we don't have to hold any instance and we don't need to create any object directly; we just need to hold the ids, or in the illustration's words, we just need to know each container's identification.
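To make the Crane illustration concrete, here is a tiny C sketch (not real OpenGL, just an analogy with hypothetical names) of an id-based state machine in the style of glGenTextures/glBindTexture: you never touch the object itself, you only hold an id, bind it, and then operate on whatever is bound:

```c
#include <string.h>

/* Hypothetical "port": a fixed pool of containers, addressed only by id. */
#define MAX_CONTAINERS 16

typedef struct { int used; char content[32]; } Container;

Container port[MAX_CONTAINERS];   /* the port: private, never handed out */
unsigned currentBound = 0;        /* id 0 means "nothing bound", like OpenGL */

/* Like glGenTextures: you get back an opaque id, never the object itself. */
unsigned portGenContainer(void)
{
    for (unsigned i = 1; i < MAX_CONTAINERS; ++i)
        if (!port[i].used) { port[i].used = 1; return i; }
    return 0; /* no space left */
}

/* Like glBindTexture: make one container the current state. */
void portBindContainer(unsigned id) { currentBound = id; }

/* Like glTexImage2D: operate on whatever is currently bound. */
void portSetContent(const char *content)
{
    if (currentBound) strncpy(port[currentBound].content, content, 31);
}

const char *portGetContent(unsigned id) { return port[id].content; }
```

Your application only keeps the little `unsigned` id around; the "container" stays inside the "port", exactly like OpenGL keeps textures and buffers on its side.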

<br/>  
<h2><strong>How OpenGL works</strong></h2>  
Deep inside OpenGL's core, the calculations are done directly on the GPU, using hardware acceleration for floating points. Huh?

The CPU (Central Processing Unit) is the processor of a computer or device. The GPU (Graphics Processing Unit) is the graphics card of a computer or device. The Graphics Card exists to relieve the Processor, because it can make a lot of computations to deal with images before presenting the content on the screen.

So, deep down, what OpenGL does is leave all the massive computations to the GPU instead of calculating everything on the CPU. The GPU is much, much faster at dealing with floating point numbers than the CPU. This is the fundamental reason a 3D game runs faster with a better Graphics Card. This is even the reason professional 3D software gives you an option to work with a "Software Render" (CPU processing) or a "Graphics Card Render" (GPU processing). Some software also gives you an option called "OpenGL"; well, now you know: that option means GPU processing!

So, does OpenGL work entirely on the GPU?

Not quite.  
Just the heavy image processing and a few other things. OpenGL gives us a lot of features to store images, data and information in an optimized format. This optimized data will be processed later, directly by the GPU.

So, is OpenGL hardware dependent?

Unfortunately, yes! If the hardware (Graphics Card) doesn't support OpenGL, we can't use it.  
New OpenGL versions often need new GPU features.  
This is something to know, but not to worry about. As OpenGL always needs a vendor's implementation, we developers will work with a new OpenGL version only when the devices are prepared for it.

In practice, all Graphics Card chips today have an implementation of OpenGL. So you can use OpenGL in many languages and on many devices. Even on Microsoft Windows. (Brugh)

<br/>  
<h2><strong>OpenGL's Logic</strong></h2>  
OpenGL is a very concise and focused Graphics Library.  
What you see in professional 3D software is super ultra complex work built on top of OpenGL, because deep down, OpenGL's logic cares about only a few things:  
<ul>  
    <li>Primitives</li>
    <li>Buffers</li>
    <li>Rasterize</li>
</ul>

<p>Just that? 3 little things? <br />
Believe it, OpenGL works around these 3 concepts. Let's see each concept independently and how the three join to create the most advanced 3D Graphics Library (you can also use OpenGL for 2D graphics; a 2D image to OpenGL is just 3D with everything at Z depth 0, and we'll talk about that later on).</p>

<p><br/>  </p>

<h3>Primitives</h3>  

<p>OpenGL's primitives are limited to 3 little kinds of objects:  </p>

<ul>  
    <li>A 3D Point in space (x, y, z)</li>
    <li>A 3D Line in space (composed of two 3D Points)</li>
    <li>A 3D Triangle in space (composed of three 3D Points)</li>
</ul>

<p>A 3D Point can be used as a particle in space. <br />
A 3D Line is always a single line and can be used as a 3D vector. <br />
A 3D Triangle could be one face of a mesh which has thousands, maybe millions, of faces. <br />
Some OpenGL versions also support quads (quadrangles), which are merely an offshoot of triangles. But as OpenGL ES was made to achieve maximum performance, quads are not supported.</p>

<p><br/><a name="Buffers"></a>  </p>

<h3>Buffers</h3>  

<p>Now let's talk about the buffers. In simple words, a buffer is temporary optimized storage. Storage for what? For a lot of stuff. <br />
OpenGL works with 3 kinds of buffers:  </p>

<ul>  
    <li><strong>Frame Buffers</strong></li>
    <li><strong>Render Buffers</strong></li>
    <li><strong>Buffer Objects</strong></li>
</ul>

<p><strong>Frame Buffers</strong> are the most abstract of the three. When you make an OpenGL render you can send the final image directly to the device's screen or to a Frame Buffer. So a Frame Buffer is temporary image data, right? <br />
Not exactly. You can imagine it as the output of an OpenGL render, and this can mean a set of images, not just one. What kind of images? Images about the 3D objects, about the depth of the objects in space, about the intersection of the objects and about the visible parts of the objects. So the Frame Buffer is like a collection of images, all of them stored as binary arrays of pixel information.</p>

<p>A <strong>Render Buffer</strong> is temporary storage for one single image. Now you can see more clearly that a Frame Buffer is a collection of Render Buffers. There are a few kinds of Render Buffer: Color, Depth and Stencil.  </p>

<ul>  
    <li>The Color Render Buffer stores the final colored image generated by the OpenGL render. It is a colored (RGB) image.</li>
    <li>The Depth Render Buffer stores the final Z depth information of the objects. If you are familiar with 3D software, you know what a Z depth image is: a grayscale image of the Z positions of the objects in 3D space, in which full white represents the nearest visible object and black represents the farthest one (full black is invisible).</li>
    <li>The Stencil Render Buffer is aware of the visible parts of the objects, like a mask of the visible parts. It is a black and white image.</li>
</ul>

<p><a name="buffer_objects"></a> <br />
<strong>Buffer Objects</strong> are storage which OpenGL calls "server-side" (in the server's address space). A Buffer Object is also temporary storage, but not as temporary as the others: a Buffer Object can persist throughout the application's execution. Buffer Objects can hold information about your 3D objects in an optimized format. This information can be of two types: Structures or Indices.</p>

<p>A Structure is an array which describes your 3D object, like an array of vertices, an array of texture coordinates or an array of whatever you want. The Indices are more specific: an array of indices is used to indicate how the faces of your mesh will be constructed based on an array of structures.</p>

<p>Seems confusing?</p>

<p>OK, let's see an example. <br />
Think about a 3D cube. This cube has 6 faces composed of 8 vertices, right?</p>

<p><img src='http://db-in.com/images/cube_example.gif'  alt="3D cube made with OpenGL" title="cube_example" width="600" height="549" class="size-full wp-image-773" /></p>

<p>Each of these 6 faces is a quad, but do you remember that OpenGL only knows about triangles? So we need to transform those quads into triangles to work with OpenGL. When we do this, the 6 faces become 12 faces! <br />
The image above was made with Modo; look at the bottom right corner. That is the information Modo gives about this mesh: as you can see, 8 vertices and 12 faces (GL: 12). <br />
Now, let's think.</p>

<p>A triangle in OpenGL is a combination of three 3D vertices. So to construct the cube's front face we need to instruct OpenGL this way: {vertex 1, vertex 2, vertex 3}, {vertex 1, vertex 3, vertex 4}. Right?</p>

<p>In other words, we need to repeat 2 vertices for each of the cube's faces. This could be worse: if our mesh had pentagons we would need to repeat 4 vertices' information, with hexagons, 6 vertices' information, with heptagons, 8, and so on. <br />
This is much too expensive.</p>

<p>So OpenGL gives us a way to do that more easily, called the Array of Indices! <br />
In the cube example above, we could have an array of the 8 vertices: {vertex 1, vertex 2, vertex 3, vertex 4, ...} and, instead of rewriting this information for each of the cube's faces, we construct an array of indices: {0,1,2,0,2,3,2,6,3,2,5,6...}. Each combination of 3 elements in this array of indices (0,1,2 - 0,2,3 - 6,3,2) represents a triangle face. With this feature we can write each vertex's information once and reuse it many times in the array of indices.</p>

<p>Now, returning to the Buffer Objects: the first kind is an array of structures, like {vertex 1, vertex 2, vertex 3, vertex 4, ...}, and the second kind is an array of indices, like {0,1,2,0,2,3,2,6,3,2,5,6...}.</p>

<p>The great advantages of Buffer Objects are that they are optimized to be processed directly by the GPU and that you don't need to hold the arrays in your application any more after creating a Buffer Object.</p>
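Here is a quick C sketch of the cube data above (the triangulation chosen here is just one valid possibility, not an official layout): 8 unique vertices plus 36 indices, compared with repeating full vertices for all 12 triangles:

```c
#include <stddef.h>

/* 8 unique vertices of a cube (x, y, z) */
float cubeVertices[8][3] = {
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1}, /* front face */
    {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1}  /* back face */
};

/* 12 triangle faces (2 per quad face); each entry points into cubeVertices */
unsigned short cubeIndices[36] = {
    0,1,2, 0,2,3,  /* front  */
    1,5,6, 1,6,2,  /* right  */
    5,4,7, 5,7,6,  /* back   */
    4,0,3, 4,3,7,  /* left   */
    3,2,6, 3,6,7,  /* top    */
    4,5,1, 4,1,0   /* bottom */
};

/* Memory cost of repeating full vertices for every triangle... */
unsigned int flatCubeBytes(void)
{
    return 12 * 3 * 3 * (unsigned int)sizeof(float);
}

/* ...versus storing 8 unique vertices plus 36 small indices. */
unsigned int indexedCubeBytes(void)
{
    return 8 * 3 * (unsigned int)sizeof(float)
         + 36 * (unsigned int)sizeof(unsigned short);
}
```

On a platform where `float` is 4 bytes, the flat version costs 432 bytes while the indexed one costs 168, and the gap grows quickly with real meshes.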

<p><br/>  </p>

<h3>Rasterize</h3>  

<p>Rasterization is the process by which OpenGL takes all the information about the 3D objects (all those coordinates, vertices, maths, etc.) and creates a 2D image. This image will suffer some changes and then be presented on the device's screen (commonly).</p>

<p>But this last step, the bridge between pixel information and the device's screen, is a vendor's responsibility. The Khronos Group provides another API called EGL, but here the vendors can interfere. We developers don't work directly with Khronos' EGL, but with the vendor's modified version.</p>

<p>So, when you make an OpenGL render you can choose to render directly to the screen, using the vendor's EGL implementation, or to render to a Frame Buffer. Rendering to a Frame Buffer, you are still inside the OpenGL API, but the content will not be shown on the device's screen yet. Rendering directly to the device's screen, you leave the OpenGL API and enter the EGL API. So at render time you can choose one of the two outputs.</p>

<p>But don't worry about this now. As I said, each vendor makes their own implementation of the EGL API. Apple, for example, doesn't let you render directly to the device's screen; you always need to render to a Frame Buffer and then use Apple's EGL implementation to present the content on the device's screen.</p>

<p><br/>  </p>

<h2><strong>OpenGL's pipelines</strong></h2>  

<p>I said before "programmable pipeline" and "fixed pipeline". But what the hell is a programmable pipeline, in simple words?</p>

<p>The programmable pipeline is the Graphics Library delegating to us, developers, the responsibility for everything related to Cameras, Lights, Materials and Effects. And we do this by working with the famous Shaders. So every time you hear "programmable pipeline", think of Shaders!</p>

<p>But now, what the hell are Shaders?</p>

<p>Shaders are little pieces of code, just like little programs, working directly on the GPU to make complex calculations. Complex like: the final color of a surface point which has a texture T, modified by a bump texture TB, using a specular color SC with specular level SL, under a light L with power LP at an incidence angle LA from distance Z with falloff F, and all of this seen by the eyes of a camera C located at position P with its projection lens.</p>

<p>Whatever that means, it's too complex to be processed by the CPU and far too complex for the Graphics Libraries to keep caring about. So the programmable pipeline is just us managing that kind of thing.</p>

<p>And the fixed pipeline?</p>

<p>It's the inverse! The fixed pipeline is the Graphics Library caring about all those things and giving us an API to set the Cameras, Materials, Lights and Effects.</p>

<p>To create shaders we use a language similar to C: the OpenGL Shading Language (GLSL). OpenGL ES uses a slightly stricter version called the OpenGL ES Shading Language (also known as GLSL ES or ESSL). The difference is that you have more fixed functions and can declare more variables in GLSL than in GLSL ES, but the syntax is the same.</p>
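To give you a first taste, here is a minimal GLSL ES pair (the attribute/uniform names are just my own conventions, not mandated by the language): a Vertex Shader that transforms each vertex and passes a texture coordinate along, and a Fragment Shader that samples a texture with it.

```glsl
// Vertex Shader (VSH): runs once per vertex.
attribute vec4 a_position;   // vertex position, from a Buffer Object
attribute vec2 a_texCoord;   // per-vertex texture coordinate
uniform mat4 u_mvpMatrix;    // camera + model transformation
varying vec2 v_texCoord;     // value passed along to the Fragment Shader

void main()
{
    v_texCoord = a_texCoord;
    gl_Position = u_mvpMatrix * a_position;
}
```

```glsl
// Fragment Shader (FSH): runs once per visible fragment.
precision mediump float;
uniform sampler2D u_texture; // the mesh's texture
varying vec2 v_texCoord;     // interpolated value coming from the VSH

void main()
{
    gl_FragColor = texture2D(u_texture, v_texCoord);
}
```

Don't worry about the details yet; just notice the pairing: the `varying` declared in the VSH is the same one read by the FSH.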

<p>Well, but how do these shaders work?</p>

<p>You create them in separate files or write them directly in your code, whatever; the important thing is that the final string containing the shader source will be sent to OpenGL's core, and the core will compile the shaders for you (you can even use pre-compiled binary shaders, but that is for another part of this series).</p>

<p>The shaders work in pairs: Vertex Shader and Fragment Shader. This topic needs more attention, so let's look closely at the Vertex and Fragment Shaders. To understand what each shader does, let's go back to the cube example.</p>

<p><img src='http://db-in.com/images/shaders_example.gif'  alt="3D cube to illustrate VSH and FSH" title="shaders_example" width="600" height="549" class="size-full wp-image-779" /></p>

<p><br/>  </p>

<h3>Vertex Shader</h3>  

<p>The Vertex Shader, also known as VS or VSH, is a little program which will be executed for each vertex of a mesh. <br />
Look at the cube above; as I said earlier, this cube needs 8 vertices (in this image vertex 5 is invisible, and you will understand why shortly). <br />
So this cube's VSH will be processed 8 times by the GPU.</p>

<p>What the Vertex Shader does is define the final position of a vertex. Do you remember that the programmable pipeline left us responsible for the camera? So now it's time!</p>

<p>The position and the lens of a camera can interfere with the final position of a vertex. The Vertex Shader is also responsible for preparing and outputting some variables to the Fragment Shader. In OpenGL we can feed per-vertex variables to the Vertex Shader, but not to the Fragment Shader directly. Because of that, our fragment variables must pass through the Vertex Shader.</p>

<p>But why don't we have access to the Fragment Shader directly? <br />
Well, let's see the FSH and you will understand.</p>

<p><br/>  </p>

<h3>Fragment Shader</h3>  

<p>Look at the cube image again. <br />
Did you notice vertex 5 is invisible? This is because at this specific position and rotation we can only see 3 faces, and these 3 faces are composed of 7 vertices.</p>

<p>This is what the Fragment Shader does! The FSH will be processed for each VISIBLE fragment of the final image. Here you can understand a fragment as a pixel, but normally it is not exactly a pixel, because between the OpenGL render and the presentation of the final image on the device's screen there can be stretching. So a fragment can result in less than a real pixel or more than a real pixel, depending on the device and the render configurations. In the cube above, the Fragment Shader will be processed for each pixel of those three visible faces formed by 7 vertices.</p>

<p>Inside the Fragment Shader we work with everything related to the mesh's surface, like materials, bump effects, shadow and light effects, reflections, refractions, textures and any other kind of effect we want. The final output of the Fragment Shader is a pixel color in RGBA format.</p>

<p>Now, the last thing you need to know is how the VSH and FSH work together. It's mandatory: ONE Vertex Shader to ONE Fragment Shader, no more, no less; it must be exactly one to one. To ensure we don't make mistakes, OpenGL has something called a <strong>Program</strong>. A Program in OpenGL is just the compiled pair of one VSH and one FSH. Just that, nothing more.</p>

<p><br/>  </p>

<h2><strong>Conclusion</strong></h2>  

<p>Very well!</p>

<p>This is all about OpenGL's concepts. Let's recap everything.</p>

<ol>  
    <li>OpenGL's logic is composed of just 3 simple concepts: Primitives, Buffers and Rasterize.
<ul>  
    <li>Primitives are points, lines and triangles.</li>
    <li>Buffers can be Frame Buffers, Render Buffers or Buffer Objects.</li>
    <li>Rasterize is the process which transforms OpenGL's mathematics into pixel data.</li>
</ul>  
</li>  
    <li>OpenGL works with a fixed or a programmable pipeline.
<ul>  
    <li>The fixed pipeline is old, slow and large. It has a lot of fixed functions to deal with Cameras, Lights, Materials and Effects.</li>
    <li>The programmable pipeline is easier, faster and cleaner than the fixed pipeline, because in the programmable way OpenGL leaves to us, developers, the task of dealing with Cameras, Lights, Materials and Effects.</li>
</ul>  
</li>  
    <li>Programmable pipeline is synonymous with Shaders: the Vertex Shader, run at each vertex of a mesh, and the Fragment Shader, run at each VISIBLE fragment of a mesh. Each pair of Vertex Shader and Fragment Shader is compiled into one object called a Program.</li>
</ol>

<p>Looking at these 3 topics, OpenGL seems very simple to understand and learn. Yes! It is very simple to understand... but to learn... hmmm... <br />
These 3 little topics have numerous ramifications, and learning all about them can take months or more.</p>

<p>What I'll try to do in the next two parts of this series is give you everything I've learned in 6 immersive months of deep, hard OpenGL study. In the next one, I'll show you the basic functions and structures of a 3D application using OpenGL, independently of which programming language you are using or which is your final device.</p>

<p>But before that, I want to introduce you to one more OpenGL concept.</p>

<p><br/>  </p>

<h2><strong>OpenGL's Error API</strong></h2>  

<p>OpenGL is a great State Machine working like a Port Crane, and you don't have access to what happens inside it. So if an error occurs inside it, nothing will happen to your application, because OpenGL is a completely external core.</p>

<p>But how do you know if one of your shaders has a little error? How do you know if one of your render buffers is not properly configured?</p>

<p>To deal with all the errors, OpenGL gives us an Error API. This API is very, very simple: it has a few fixed functions working in pairs. One is a simple check, yes or no, just to know if something completed successfully or not. The other retrieves the error message. So it's very simple: first you check, very quickly, and if there is an error, then you get the message.</p>

<p>Generally we place some checks at critical points, like shader compilation or buffer configuration, to stay aware of the most common errors.</p>
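As a small taste of that check-then-ask flow, here is a C sketch that translates the error codes defined by the OpenGL ES 2.0 specification into readable names. The constants are redefined locally so the sketch compiles without the GL headers; in real code you would include the GL headers and call glGetError() after each critical block instead:

```c
/* Error codes as defined by the OpenGL ES 2.0 specification
   (redefined here only so this sketch compiles without GL headers). */
#define GL_NO_ERROR          0x0000
#define GL_INVALID_ENUM      0x0500
#define GL_INVALID_VALUE     0x0501
#define GL_INVALID_OPERATION 0x0502
#define GL_OUT_OF_MEMORY     0x0505

/* Translate a glGetError()-style result into a readable string. */
const char *glErrorName(unsigned int error)
{
    switch (error)
    {
        case GL_NO_ERROR:          return "GL_NO_ERROR";
        case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
        case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
        case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
        case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
        default:                   return "Unknown error";
    }
}
```

In practice the pattern is: perform the risky operation, call the check function, and only if it reports an error go and fetch the readable message, so the fast path stays fast.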

<p><br/>  </p>

<h2><strong>On the Next</strong></h2>

<p>OK, dude, now we are ready to go. <br />
In the next tutorial we'll see some real code; prepare yourself to write a lot.</p>

<p>Thanks for reading and see you in the next part!</p>

<p><strong>NEXT PART:</strong> <a href='http://blog.db-in.com/all-about-opengl-es-2-x-part-2'  target="_blank">Part 2 - OpenGL ES 2.0 in deeply (Intermediate)</a></p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/all-about-opengl-es-2-x-part-1/</link><guid isPermaLink="false">db4d1e72-cc05-401f-bdc4-76ef64604dd2</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Tue, 04 Feb 2014 01:44:45 GMT</pubDate></item><item><title><![CDATA[Objective-C Compiling ARC and MRR]]></title><description><![CDATA[<p><img src='http://db-in.com/images/arc_mrr_feature.png'  alt="" title="Binary world" width="200" height="200" class="alignleft size-full" />Hi fellows!</p>

<p>This article will treat the myths and truths behind iOS ARC (Automatic Reference Counting) and MRR (Manual Retain Release). Let's understand how it works, what the compiler does with it, why some people love it and others hate it. Also, my favorite part: let's see how to create a cross ARC/MRR application! When working on a team, usually in agencies and companies, we may have a Senior developer (who usually prefers MRR) and a Junior developer (who prefers ARC) working on the same project. Using this technique, both can work happily together.</p>

<!--more-->

<p><br/>  </p>

<h2><strong>At a glance</strong></h2>  

<p>It's important to say that this is not an ARC tutorial. I'll not teach you how to use ARC at all. To continue reading you must already know what ARC is. This article will show you how to integrate ARC and MRR in the same application, or better said, how to make code that compiles in both ARC and MRR environments without changing a single line of code.</p>

<p>First off, we must understand how ARC works and what it really means for the final compiled code. Of course, if you're a Senior dev, you can just skip this introduction and go straight to the fun part.</p>

<p>Here is the sample Xcode project for this tutorial: <br />
<a href='https://github.com/dineybomfim/arc-mrr/archive/master.zip'  onmousedown="_gaq.push(['_trackEvent', 'Obj-C ARC+MRR', 'Xcode', 'Download']);"><img class="alignleft" title="download" src='http://db-in.com/imgs/download_button.png'  alt="Download Xcode project files to iPhone"/> <br />
<strong>Download now</strong> <br />
Xcode project files to iOS 5.1 or later <br />
4.0Mb <br />
</a><br/> <br />
Github: <a href='https://github.com/dineybomfim/arc-mrr/'  target="_blank">https://github.com/dineybomfim/arc-mrr/</a></p>

<p>OK. Let's get started.</p>

<p><br/>  </p>

<h2><strong>The ARC concept</strong></h2>  

<p>The first thing you must understand about ARC is: it sucks! The idea that ARC saves your coding time is not really a big deal for anyone who really knows how to manage the application's memory. In fact, there is no real time saved by using ARC. The real point is: for developers who came from other languages that support Garbage Collection, the memory management concept is a really hard part to understand.</p>

<p><strong>So here is the real point of Apple creating ARC: learning Obj-C for those who never saw any kind of memory management is really painful!</strong></p>

<p>I can say that because I walked this path. I started my developer life with very high-end languages, server-side languages, JavaScript and that kind of shit. So when I met C, the concept of memory management was something really new, with no relation to my background. Thinking like Apple, increasing the pool of Obj-C developers is very important, so the ARC concept comes in to solve a big hole in the issue of engaging new developers and growing the community.</p>

<p>Some may ask about Garbage Collection. Well, Mac OS has Garbage Collection and MRR as well. But for iOS, running on very tiny and limited devices, battery life is the biggest problem. Garbage Collection on iOS could really kill the battery life, so it is completely out of the question. Running something in the background all the time just to make sure the memory management is on track is not possible on mobiles.</p>

<p>The ARC concept is very simple: developers don't need to think about memory management because we (Apple) will review their code, looking for all the places where memory should be allocated and released. Then we (Apple) will place the instructions about retaining, releasing or even autoreleasing in the right spots. All this job is done during the compilation of the code. So every time the developer hits "compile", we review their code first. No cost to the application at runtime. It's a great idea, isn't it?</p>

<p>Yeah, the theory is perfect, but not the practice. Memory management is something extremely complex with a lot of collateral effects across the entire code. So to make sure ARC is not doing shit, Apple asks the developer to give some instructions, like saying this variable is "strong", meaning "I'll need it over a very long time", and this variable is "weak", meaning "I just need it for a short period of time".</p>

<p>There is much more. The developer also needs to say, when a basic C variable is cast to an Obj-C variable, what kind of conversion is involved, because ARC is just for Obj-C, not for pure C. <br />
O.o <br />
Oh God...</p>

<p>The high-level Obj-C frameworks are far from controlling everything. If you need anything more specific you'll face the low-level Apple frameworks, which are pure C. Actually, I love C. I'm used to saying that there is no real iOS or Mac application made with pure Obj-C. You'll always mix Obj-C and pure C, because in fact Obj-C is C (a superset of C).</p>

<p>So, I think you already got my point. In very simple terms: Apple had a good intention creating ARC. However, it is just for beginners. If you want to make a slightly better application, you'll need to learn a lot of memory management concepts. You'll need to learn about C pointers and MRR anyway. Even if you choose to keep using ARC, you'll need to learn how it really works.</p>

<p><br/>  </p>

<h2><strong>ARC - Behind the scenes</strong></h2>  

<p>OK, no more shit talking, let's get to the action. Let's understand what goes on behind ARC.</p>

<table width="675">  
<tbody>  
<tr>  
<th>ARC code</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

#import <Foundation/Foundation.h>

@interface ClassA : NSObject
{
@private
    NSDictionary *_myDict;
}

- (void) myMethod;
@end

@implementation ClassA

- (void) myMethod
{
    NSMutableArray *array = [[NSMutableArray alloc] init];

    unsigned int i, length = 10;
    for (i = 0; i < length; ++i)
    {
        [array addObject:[[NSString alloc] initWithFormat:@"%i",i]];
    }

    _myDict = [[NSDictionary alloc] initWithObjectsAndKeys:array, @"keyArray", nil];
}

@end

</pre>

<p>The above is what you code, but this is what really goes on to the compiler:</p>

<table width="675">  
<tbody>  
<tr>  
<th>TRUE ARC code</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

#import <Foundation/Foundation.h>

@interface ClassA : NSObject
{
@private
    __strong NSDictionary *_myDict;
}

- (void) myMethod;
@end

@implementation ClassA

- (void) myMethod
{
    NSMutableArray *array = [[NSMutableArray alloc] init];

    unsigned int i, length = 10;
    for (i = 0; i < length; ++i)
    {
        NSString *__scopeVar1 = [[NSString alloc] initWithFormat:@"%i",i];
        [array addObject:__scopeVar1];
        [__scopeVar1 release];
        __scopeVar1 = nil;
    }

    if (_myDict != nil)
    {
        [_myDict release];
        _myDict = nil;
    }
    _myDict = [[NSDictionary alloc] initWithObjectsAndKeys:array, @"keyArray", nil];

    if (array != nil)
    {
        [array release];
        array = nil;
    }
}

- (void) dealloc
{
    if (_myDict != nil)
    {
        [_myDict release];
        _myDict = nil;
    }

    [super dealloc];
}

@end

</pre>

<p>WOW, much more code, it's true! However, it makes it much easier to see what is going on with the application's memory.</p>

<p>Notice that variables created inside a scope will always die, that is, be released, within that same scope. The "<strong>array</strong>", for example, dies at the end of the method's scope, and "<strong>__scopeVar1</strong>" dies at the end of each loop iteration. It's important to say that ARC will always prefer a direct "<em>release</em>" over "<em>autorelease</em>", because it's faster and frees memory sooner. In fact, "<em>autorelease</em>" is used by ARC in very few situations, such as returning an instance from a function/method; in that case there is no other way than autoreleasing it. We'll see more about "<em>autorelease</em>" soon.</p>
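<p>The scope-based release above has a direct analogue in plain C: a heap allocation made inside a scope should be freed before that scope ends. Here is a minimal, hypothetical C sketch of the same loop, using <em>malloc/free</em> in place of <em>alloc/release</em> (note that <em>strdup</em> copies, while <em>addObject:</em> actually retains; the ownership idea is the same):</p>

<pre class="brush:cpp">

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

int main(void)
{
    // Mirrors the ARC expansion: each __scopeVar1 dies at the end
    // of its own loop iteration scope.
    char *collected[10];
    unsigned int i, length = 10;

    for (i = 0; i &lt; length; ++i)
    {
        char *scopeVar = malloc(4);        // like [[NSString alloc] initWithFormat:...]
        snprintf(scopeVar, 4, "%u", i);
        collected[i] = strdup(scopeVar);   // the "array" keeps its own reference
        free(scopeVar);                    // like [__scopeVar1 release]
        scopeVar = NULL;                   // like __scopeVar1 = nil
    }

    printf("%s %s\n", collected[0], collected[9]);

    // The "array" itself dies at the end of the enclosing scope.
    for (i = 0; i &lt; length; ++i)
        free(collected[i]);

    return 0;
}

</pre>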

<p>Another thing I want to call attention to is how ARC releases the variables. ARC uses what we call "<em>safe release</em>": the variable is always set to "<em>nil</em>" right after being released. This is a safe way to avoid zombies. Don't know what a zombie is? A zombie appears when an instance receives a message after it has already been released; for example, if an already released instance receives another "<em>release</em>" message, the application will crash. I'll explain it better shortly; for now, just keep in mind that checking whether the variable is not "<em>nil</em>" and setting it to "<em>nil</em>" right after releasing it is a safe way to avoid zombies. Of course there are many other ways to create a zombie, but this pattern avoids a lot of them.</p>
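<p>The same check-then-nil pattern can be captured in a plain C macro; this is essentially what the <strong>nppFree</strong> macro shown later in this article does. A minimal sketch (the <em>({ ... })</em> statement-expression form is a GCC/Clang extension, the same one the article's macros use):</p>

<pre class="brush:cpp">

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

// Safe free: only frees a live pointer, then NULLs it so a second
// call becomes a harmless no-op instead of a double free (the C
// cousin of an Objective-C zombie crash).
#define safeFree(x)  ({ if (x) { free(x); (x) = NULL; } })

int main(void)
{
    int *value = malloc(sizeof(int));
    *value = 42;

    safeFree(value);   // frees and sets value to NULL
    safeFree(value);   // no-op: pointer is already NULL

    printf("%s\n", value == NULL ? "safe" : "dangling");
    return 0;
}

</pre>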

<p>More about ARC in my Obj-C Memory article.</p>

<p><br/>  </p>

<h2><strong>ARC + MRR = S2</strong></h2>  

<p>OK fellows, here is what you're looking for. The problem today, as I said, is team work: a senior developer wants to use MRR while a junior only knows ARC. I like to create a global file called <strong>Runtime.h</strong> that holds all the runtime-related definitions, mostly C macros.</p>

<p>Here is what you need.</p>

<table width="675">  
<tbody>  
<tr>  
<th>Runtime.h</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

/*
 *    ARC+MRR.h
 *    ARC+MRR
 *    
 *    Created by Diney Bomfim on 4/28/13.
 *    Copyright 2013 db-in. All rights reserved.
 */

// Defines the ARC instructions.
#if __has_feature(objc_arc)

    // ARC definition.
    #define IS_ARC

    // Conversion instructions.
    #define ARC_UNSAFE          __unsafe_unretained
    #define ARC_BRIDGE          __bridge
    #define ARC_ASSIGN          __weak
    #define ARC_RETAIN          __strong

    // Property definitions
    #define RETAIN              strong
    #define ASSIGN              weak
    #define COPY                copy

#else

    // Conversion instructions.
    #define ARC_UNSAFE
    #define ARC_BRIDGE
    #define ARC_ASSIGN
    #define ARC_RETAIN

    // Property definitions
    #define RETAIN              retain
    #define ASSIGN              assign
    #define COPY                copy

#endif

// The retain routine.
#ifdef IS_ARC
    #define arcRetain(x)        (x)
#else
    #define arcRetain(x)        ([x retain])
#endif

// The release routine.
#ifdef IS_ARC
    #define arcRelease(x)       ({ (x) = nil; })
#else
    #define arcRelease(x)       ({ if(x) { [x release]; (x) = nil; } })
#endif

// The autorelease routine.
#ifdef IS_ARC
    #define arcAutorelease(x)   (x)
#else
    #define arcAutorelease(x)   ([x autorelease])
#endif

// The free routine, not really necessary for ARC, but let's do it to make a safe free as well.
#define nppFree(x)              ({ if(x) { free(x); (x) = NULL; } })

</pre>

<p>That's it! <br />
I prefer to use the terms "RETAIN" and "ASSIGN", because they sounds more correctly to me. Of course you can change it to "STRONG" and "WEAK" respectively if you want. Anyway, the point it how to use it? Here:</p>

<table width="675">  
<tbody>  
<tr>  
<th>Using the Runtime.h</th>  
</tr>  
</tbody>  
</table>  

<pre class="brush:cpp">

@interface ClassA : NSObject
{
@private
    NSDictionary *_myDict;
    ARC_ASSIGN NSString *_tempVar;
}

@property (nonatomic, RETAIN) NSString *strongProperty;
@property (nonatomic, ASSIGN) NSString *weakProperty;
@property (nonatomic, COPY) NSString *copyProperty;

- (void) myMethod;
@end

@implementation ClassA

- (void) myMethod
{
    NSMutableArray *array = arcAutorelease([[NSMutableArray alloc] init]);

    unsigned int i, length = 10;
    for (i = 0; i < length; ++i)
    {
        NSString *__scopeVar1 = [[NSString alloc] initWithFormat:@"%i",i];
        [array addObject:__scopeVar1];
        arcRelease(__scopeVar1);
    }

    arcRelease(_myDict);
    _myDict = [[NSDictionary alloc] initWithObjectsAndKeys:array, @"keyArray", nil];
}

@end

</pre>

<p>There are other usages for these definitions. I won't show them all here, so as not to take too much of your time; this tutorial includes a sample project, so you can grab it and see all the other usages. Notice how simple it is to use: you can turn the Objective-C ARC compiler feature OFF or ON without changing a single line of code!</p>
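<p>That on/off switch works through plain conditional compilation. Here is a stripped-down, hypothetical C sketch of the same idea (the real <strong>Runtime.h</strong> keys off <em>__has_feature(objc_arc)</em> instead of a hand-made flag):</p>

<pre class="brush:cpp">

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

// Flip this flag (e.g. compile with -DUSE_MANUAL) and the call site
// below compiles to different code without changing a single line.
#ifdef USE_MANUAL
    #define releaseObj(x)  ({ if (x) { free(x); (x) = NULL; } })
    #define MODE           "manual"
#else
    // In "automatic" mode the macro reduces to clearing the reference,
    // pretending a runtime reclaims the memory (here it simply leaks
    // until exit; this is only a sketch of the switching mechanism).
    #define releaseObj(x)  ({ (x) = NULL; })
    #define MODE           "automatic"
#endif

int main(void)
{
    char *object = malloc(8);   // stands in for [[NSObject alloc] init]
    releaseObj(object);         // identical call site in both modes

    printf("compiled as %s, object is %s\n", MODE,
           object == NULL ? "nil" : "live");
    return 0;
}

</pre>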

<p><br/><a name="conclusion"></a>  </p>

<h2><strong>Conclusion</strong></h2>  

<p>Very well my friends, that's it. Let's do a quick concept review:  </p>

<ol>  
    <li>ARC is not a savior! It's just a way Apple found to avoid losing new devs.</li>
    <li>ARC just does some very simple work during compilation; not a big deal.</li>
    <li>Using macros we can easily "undo" what ARC "does". This is great for large teams and especially when building frameworks.</li>
</ol>

<p>Remember, ARC will not save your ass all the time. You must understand what goes on under the code to avoid leaks and zombies. I've seen junior developers creating zombies even with ARC! Trust me, it's much easier than you can imagine.</p>

<p>If you have any doubts, just Tweet me:  </p>

<script src='http://platform.twitter.com/widgets.js'  type="text/javascript"></script>  

<p><a href='http://twitter.com/share?&amp;url=&amp;text=@dineybomfim' class="twitter-share-button" data-related="dineybomfim" data-text="@dineybomfim" data-count="none" data-url="">Tweet</a></p>

<p>See you soon!</p>

<p><a href='https://github.com/dineybomfim/arc-mrr/archive/master.zip'  onmousedown="_gaq.push(['_trackEvent', 'Obj-C ARC+MRR', 'Xcode', 'Download']);"><img class="alignleft" title="download" src='http://db-in.com/imgs/download_button.png'  alt="Download Xcode project files to iPhone"/> <br />
<strong>Download now</strong> <br />
Xcode project files to iOS 5.1 or later <br />
4.0Mb <br />
</a><br/> <br />
Github: <a href='https://github.com/dineybomfim/arc-mrr/'  target="_blank">https://github.com/dineybomfim/arc-mrr/</a></p>

<iframe scrolling="no" src='http://db-in.com/downloads/apple/tribute_to_jobs.html'  width="100%" height="130px"></iframe>]]></description><link>http://blog.db-in.com/objective-c-compiling-arc-and-mrr/</link><guid isPermaLink="false">6fb4b847-5b44-4685-930b-ef37c9bb1876</guid><dc:creator><![CDATA[Diney Bomfim]]></dc:creator><pubDate>Mon, 03 Feb 2014 09:49:42 GMT</pubDate></item></channel></rss>