Mastering Mobile App Development: The Importance of System Analysis — Drawing Diagrams

When developing mobile apps, it’s important to find a balance between solving current issues and preparing for future changes. This balance involves designing the system in a way that can easily accommodate future needs while keeping it organized and avoiding unnecessary features.

To achieve success, there are several important factors to consider. First, it’s crucial to understand the needs of the customers. This involves gathering information about what the customers want and expect from the app. Secondly, establishing clear lines of communication with the customers is vital. It’s important to have open and effective communication channels to discuss requirements, provide updates, and address any concerns or feedback.

Lastly, fostering mutual understanding is key. Developers should work towards building a shared understanding with the customers regarding the app’s functionality, design, and objectives. This helps ensure that the final product meets the customer’s requirements and aligns with their vision.

Now, let’s move on to discussing the importance of system analysis in mobile app development and highlighting the key considerations for developers:

Clear Communication with the Customer (Product/Analyst): 
When a customer brings you a new feature or change request, understanding their needs is crucial. Communication is your best friend in this process. Developers sometimes hesitate to ask questions when faced with unclear visions or vague requirements. But here's the thing: asking questions is key to ensuring everyone is on the same page.

So, let’s remember to ask ourselves a few important questions:

  • What value does this feature bring to customers? Understand what is going on.
  • What’s the purpose behind implementing it? Eliminate assumptions.
  • How should we handle different scenarios, such as success and error cases? And how does caching fit into the equation?

To bridge the communication gap, it’s essential to have direct conversations with analysts. An effective approach is Behavior-Driven Development (BDD), where meaningful discussions take precedence over getting bogged down by tools. By focusing on delivering maximum value through BDD, the entire team can align with customer needs.
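A BDD conversation can be pinned down in code as a Given/When/Then-structured check. A minimal sketch, with made-up names (FeedViewModel, loadFeed) purely for illustration:

```swift
// Given: a user with cached posts and no network connection
// When: they open the feed
// Then: the cached feed is shown instead of remote data

struct FeedPost: Equatable { let id: String }

final class FeedViewModel {
    private let cached: [FeedPost]
    private let isOnline: Bool

    init(cached: [FeedPost], isOnline: Bool) {
        self.cached = cached
        self.isOnline = isOnline
    }

    // Picks remote data when online, cached data otherwise
    func loadFeed(remote: [FeedPost]) -> [FeedPost] {
        isOnline ? remote : cached
    }
}

let cachedPosts = [FeedPost(id: "1")]
let viewModel = FeedViewModel(cached: cachedPosts, isOnline: false)
assert(viewModel.loadFeed(remote: []) == cachedPosts) // offline: cache wins
```

In practice, the Given/When/Then lines come straight out of the conversation with the analyst, so the check reads like the agreed behavior.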

Considerations for Stellar System Analysis: 
Now, let’s zoom in on some crucial factors to consider during system analysis:

  1. Backward Compatibility: Take a moment to think about how your app will affect loyal customers using previous versions. 
  2. Offline Support: How can you support offline functionality? Should data be cached? And how can you gracefully handle situations where the internet connection is weak or nonexistent?
  3. Stability and Scalability: It’s important to evaluate the stability of your feature. Consider the number of users expected to utilize it. Moreover, explore the idea of implementing an automatic request scheduler for frequently visited pages to ensure smooth performance.
  4. Collaboration for Success: Bringing the entire team together (backend, frontend, and mobile developers) is crucial for effective decision-making. Even if different programming languages are involved, aligning the approach across teams fosters efficient development. Remember, clear communication is vital to keep the team motivated and on the same track.

Building Effective Architecture through Communication: 
To facilitate system analysis, consider using BDD, creating use cases, and utilizing flowcharts. These tools help establish a shared understanding of requirements. Keep in mind that investing time in refining requirements pays off by preventing costly mistakes during development. And these tools only matter when the aim is not to debate which tool to use, but to think about the value the feature brings the customer, and to dig deep. 

Boosting Value through Effective Communication: 
When a team excels at communicating needs to one another and across different teams, everyone benefits! You’ll find more satisfaction in your work and increase your team’s value, ultimately leading to reduced costs for the business.

Understanding use cases, including various scenarios and offline support, empowers your team to make informed decisions. Visual representations, such as quick diagrams, can work wonders in conveying ideas among developers. Once the whole team is on the same page for the feature and its needs, then it is time for the system analysis. There is no good system design with a poor understanding of the needs of the customer. Good architecture starts with good communication. 

Now that we’ve established effective communication within the team, it’s time to dive into the analysis and design phase.

Drawing Diagrams — Modular Design

Let’s take a practical example: Suppose we need to display the latest user posts, similar to an Instagram feed. After aligning with the product requirements, we understand that we’ll be working on the following use cases:

  1. Happy Path: Displaying the user’s feed without any issues.
  2. Offline Mode: Handling the situation when the user is offline by implementing a caching mechanism to store and display previously fetched data.
  3. Low Internet Connectivity: Addressing scenarios where the user has a poor network connection. We need to define error case scenarios to handle these situations gracefully (Sad path).

Considering these use cases, here are some additional technical requirements we need to think about:

  • Offline Support: Implementing a caching mechanism to store posts locally and refresh the cache every two minutes (Cache policy).
  • Automatic Request Scheduler: Even if the user doesn’t manually pull to refresh, we can add an automatic request scheduler to periodically fetch new posts.
  • Public Access: Allowing users who are not logged in to still view public posts (Authorization Decorator Pattern).
  • Backend Synchronization: Collaborating with the backend team, we might need to implement a circuit breaker operation to handle excessive requests and ensure the system remains stable (Using a circuit breaker approach).
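The Authorization Decorator Pattern mentioned above can be sketched as follows. This is a hypothetical, simplified version: the decorator wraps any loader and applies a token only when one exists, so logged-out users still get public posts:

```swift
// A minimal sketch of the Authorization Decorator Pattern.
// All names here are illustrative, not from a real API.

protocol FeedLoader {
    func load() -> [String] // simplified: post IDs instead of full models
}

struct RemoteFeedLoader: FeedLoader {
    func load() -> [String] { ["public-post"] } // stand-in for a network call
}

struct AuthorizedFeedLoader: FeedLoader {
    let decoratee: FeedLoader
    let token: String?

    func load() -> [String] {
        // In a real client the token would be attached to the HTTP request;
        // here we prepend a marker to keep the sketch self-contained.
        guard let token = token else { return decoratee.load() } // public access
        return decoratee.load().map { "\(token):\($0)" }
    }
}

let publicLoader = AuthorizedFeedLoader(decoratee: RemoteFeedLoader(), token: nil)
assert(publicLoader.load() == ["public-post"]) // logged-out users still see public posts
```

The caller only ever sees the FeedLoader abstraction, so the authorized and public paths stay interchangeable.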

Align with designers and the backend team on a boundary contract: 
At this stage, you can identify the components the designer will give you: the page will contain posts, and each post will have images (UIImage), titles (labels), comments (a collection view), and so on. Even without seeing the whole design, you can agree on the UI components and prepare a boundary contract for the data you will get from the remote API. You may provide an example JSON response to the backend team so that they understand the expectations better, leaving no room for assumptions.

JSON contract with the backend team:

endpoint: GET /feed/{userId} (userId: UUID)
status: 200
success data:

[
  {
    "postID": "a UUID",
    "comments": [], // optional
    "imageURLs": ["https:…", "https:…"]
  }
]

Here you can differentiate the optional and required parameters with the UI and backend teams.

After agreeing on the UI components with designers and sharing the example response JSON data with the backend team, now it is time to work with the mobile team.

Pairing up with team members:

Before diving into coding, it’s good practice to pair up with a team member. For instance, one person can focus on the UI part while the other works on the API or data persistence. Here, to align with your teammate, protocols can be used as a boundary contract between modules. During system analysis, both team members should agree on the abstract representation of the code to prevent any misunderstandings.

Let’s settle on the boundary contract with the teammates for the posts so that the team can easily work independently:


Post:

  • imageUrls: An array of strings representing the URLs of the post images.
  • postId: A string identifier for the post.
  • likeCount: An integer indicating the number of likes for the post.
  • comments: An array of comments associated with the post.

Comment:

  • commentId: A string identifier for the comment.
  • commentedUserId: The ID of the user who made the comment.
  • comment: The actual comment text.
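The field lists above can be captured as a boundary contract in code. A minimal sketch; the PostsLoader protocol is a hypothetical example of the abstraction both teammates would code against:

```swift
// Models mirror the agreed field lists exactly.
struct Comment: Equatable {
    let commentId: String
    let commentedUserId: String
    let comment: String
}

struct Post: Equatable {
    let postId: String
    let imageUrls: [String]
    let likeCount: Int
    let comments: [Comment]
}

// Boundary contract: the UI depends only on this protocol,
// so API and persistence work can proceed in parallel.
protocol PostsLoader {
    func loadPosts(completion: @escaping (Result<[Post], Error>) -> Void)
}

let post = Post(
    postId: "p1",
    imageUrls: [],
    likeCount: 0,
    comments: [Comment(commentId: "c1", commentedUserId: "u1", comment: "hi")]
)
assert(post.comments.count == 1)
```

With the protocol agreed on, one teammate can work against a stub conforming to PostsLoader while the other builds the real API or persistence implementation.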

During your pairing session, you may explore further details, such as:

  • SSL Pinning: Considering SSL pinning to match certificates for enhanced security.
  • APICircuitBreaker: Implementing a retry mechanism to prevent overloading and system failure. The circuit breaker monitors the health of the API and temporarily stops sending requests if issues are detected. This allows the system to recover and prevents further problems.
  • Automatic Request Scheduler: Developing a system that automatically sends HTTP requests at specified intervals, such as every two minutes. This ensures our app remains up-to-date with the latest posts without relying on manual pull-to-refresh actions.
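The APICircuitBreaker idea can be sketched as a small state machine: after a number of consecutive failures the breaker opens and requests are skipped for a cool-down period. The thresholds and names below are illustrative assumptions, not a production implementation:

```swift
import Foundation

// Hypothetical sketch of a circuit breaker: it tracks consecutive failures
// and "opens" (rejects requests) for a cool-down period once a threshold
// is crossed, giving the backend time to recover.
final class CircuitBreaker {
    private let failureThreshold: Int
    private let cooldown: TimeInterval
    private var consecutiveFailures = 0
    private var openedAt: Date?

    init(failureThreshold: Int = 3, cooldown: TimeInterval = 60) {
        self.failureThreshold = failureThreshold
        self.cooldown = cooldown
    }

    // Open means: skip the network call entirely until the cooldown passes.
    var isOpen: Bool {
        guard let openedAt = openedAt else { return false }
        return Date().timeIntervalSince(openedAt) < cooldown
    }

    func recordSuccess() {
        consecutiveFailures = 0
        openedAt = nil
    }

    func recordFailure() {
        consecutiveFailures += 1
        if consecutiveFailures >= failureThreshold { openedAt = Date() }
    }
}

let breaker = CircuitBreaker(failureThreshold: 2, cooldown: 60)
breaker.recordFailure()
assert(!breaker.isOpen) // one failure: still closed
breaker.recordFailure()
assert(breaker.isOpen)  // threshold hit: skip requests until cooldown passes
```

The automatic request scheduler would consult `isOpen` before each periodic fetch, so a struggling backend is not flooded with retries.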

By discussing and clarifying these details during the system analysis phase, you and your team can establish a solid foundation for the upcoming development tasks. 

Finally, when development is done, the code should merge and work together, since the team abided by the protocols written during the design phase. 
Here is how the design could be:

[Diagram: system analysis and modular design]

During the system analysis and design phase, it’s essential to visualize the architecture of your mobile app. In the drawing, I’ve assigned different colors to each module, signifying their independence and deployability. This approach aligns perfectly with the principles of clean architecture.

Clean architecture promotes modularity by ensuring that higher-level modules do not depend on lower-level ones; instead, all modules rely on abstractions, creating a clear separation of concerns. This decoupling allows for flexibility and maintainability, as changes made to one module do not impact others. It also enables the whole team to work independently on the same feature.

By designing the app with independent and deployable modules, we establish a solid foundation for scalability and extensibility. Each module serves a specific purpose and can be modified or replaced without affecting the overall system. This modularity enhances testability and enables seamless integration of new features or updates in the future.

Keep in mind that when creating the architecture diagram, it’s important to remember that its purpose is to have clear communication within the team before diving into the code.

Drawing the architecture diagram should be a quick process, taking around 10 to 15 minutes at most. Its main goal is to establish a shared understanding among team members and ensure everyone is on the same page regarding the app’s structure and module dependencies.

Using the diagram as extensive documentation can be costly and burdensome. Software is constantly evolving, and relying solely on a static diagram for documentation can quickly become outdated and difficult to maintain. Instead, it’s better to view the diagram as a starting point, allowing for flexibility and adaptability as the software evolves.

By keeping the diagram as a sketch or visual aid, you can save time and effort. The team can see how communication will occur between the modules, and possible problems, such as retain cycles, can be detected beforehand. The diagram will be a helpful tool for initial communication and collaboration, ensuring a solid foundation for the development process.

Clean architecture is all about modularity and flexibility. Divide your code into independent modules that can be easily switched out or reused across different applications. Remember, there’s no one-size-fits-all solution! Tailor your architecture and system analysis to meet the unique needs of each app. The more you explore different approaches, the broader your range of options becomes for addressing app-specific requirements.

After you have the design, where should you start to code?
The design will keep changing, and coding straight from it will lead to deleting and rewriting code, and eventually to a demotivated team.
First create the boundary contracts that the whole team will abide by; then start with the highest-level module, the one you have the most information about.

In conclusion, system analysis is a powerful tool in mobile app development. It enables you to understand and fulfill customer needs effectively. By prioritizing clear communication, embracing collaboration, and following solid architectural principles, you’ll craft efficient, scalable, and maintainable mobile applications. So go forth and conquer the world of app development!



Composition Layer

Let’s say you have two services: a remote service that fetches data from a server, and a local service that retrieves data from local storage. In this scenario, the goal is to ensure that users see the latest data if there is an internet connection. However, if there isn’t, they can still access the data stored locally on their device. To achieve this, we need to come up with composition layers, focusing on the separation of concerns and achieving the desired behavior:

  1. Remote Service (RemoteItemsLoader): This layer encapsulates the functionality to fetch data from a remote service. It handles network requests, communicates with the remote API, and retrieves the most updated data. It provides a method, such as fetchItems(completion:), which retrieves data from the remote service asynchronously.
  2. Local Service (LocalItemsLoader): This layer handles the retrieval of cached data from the local storage. It abstracts the details of accessing the local cache and provides a method, such as fetchCachedItems(completion:), to retrieve the cached data asynchronously.
  3. Data Fetcher (RemoteItemsLoaderWithLocalFallbackService): This composition layer coordinates the remote and local services to provide the required behavior. It is responsible for deciding whether to fetch data from the remote service or use the local cached data based on the availability of a network connection.
  • When data is requested, the Data Fetcher first checks if a network connection is available.
  • If a network connection is available, the Data Fetcher calls the fetchItems(completion:) method on the Remote Service to retrieve the most updated data from the remote source.
  • If there is no network connection, the Data Fetcher falls back to the fetchCachedItems(completion:) method on the Local Service to retrieve the cached data from the local storage.
  • The Data Fetcher then returns the fetched items (either from the remote service or the local cache) or any errors to the caller through completion handlers or appropriate result types.

By separating the responsibilities into composition layers, you achieve a clear separation of concerns:

  • The Remote Service focuses on fetching data from the remote source, handling network communication, and retrieving the most updated data.
  • The Local Service focuses on retrieving data from the local cache, abstracting the details of local storage access.
  • The Data Fetcher acts as the orchestrator, deciding whether to use the remote service or the local cache based on network availability.

This composition-based approach allows for easy extensibility and modularity. You can easily swap out different implementations of the Remote Service and Local Service without affecting the Data Fetcher or other parts of the code. This adheres to the principles of separation of concerns, modularity, and encapsulation.

struct Item { /* post data */ }

class RemoteItemsLoader {
    // Remote items loader implementation
    func fetchItems() -> [Item] {
        // Perform the network request and decode the response
        return []
    }
}

class LocalItemsLoader {
    // Local items loader implementation
    func fetchCachedItems() -> [Item] {
        // Read previously cached items from local storage
        return []
    }
}

class RemoteItemsLoaderWithLocalFallbackService {
    private let remoteLoader: RemoteItemsLoader
    private let localLoader: LocalItemsLoader

    init(remoteLoader: RemoteItemsLoader, localLoader: LocalItemsLoader) {
        self.remoteLoader = remoteLoader
        self.localLoader = localLoader
    }

    func loadItems() -> [Item] {
        if isNetworkAvailable() {
            return remoteLoader.fetchItems()
        } else {
            return localLoader.fetchCachedItems()
        }
    }

    private func isNetworkAvailable() -> Bool {
        // Check if network is available
        // Implement your network availability check logic here
        return true
    }
}
In the above code, we have the RemoteItemsLoader and LocalItemsLoader classes, which represent the remote and local item loaders. The RemoteItemsLoaderWithLocalFallbackService class composes these two loaders and provides a loadItems() method to retrieve the items.

During initialization, you inject instances of RemoteItemsLoader and LocalItemsLoader into the RemoteItemsLoaderWithLocalFallbackService. This allows you to easily switch between different implementations without modifying the code that uses RemoteItemsLoaderWithLocalFallbackService, adhering to the Open/Closed principle.

In the loadItems() method, the network availability is checked using the isNetworkAvailable() method. If the network is available, it calls the fetchItems() method on the remoteLoader instance to retrieve the items. Otherwise, it calls the fetchCachedItems() method on the localLoader instance to get the items.

With this composition-based approach, you can easily swap out different implementations of RemoteItemsLoader and LocalItemsLoader without modifying the code that uses RemoteItemsLoaderWithLocalFallbackService. This promotes the Open-Closed principle by allowing extension and customization of behavior without directly modifying existing code.
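One way to make that swapping concrete is to extract a shared protocol, a hypothetical refinement of the classes above (names changed to avoid clashing with them):

```swift
struct Item: Equatable { let id: Int }

// The abstraction both loaders and the fallback service depend on.
protocol ItemsLoading {
    func loadItems() -> [Item]
}

struct RemoteLoader: ItemsLoading {
    func loadItems() -> [Item] { [Item(id: 1)] } // stand-in for a network call
}

struct LocalLoader: ItemsLoading {
    func loadItems() -> [Item] { [Item(id: 2)] } // stand-in for cache access
}

// The fallback service depends only on the abstraction, so any conforming
// loader can be injected without changing this type (Open/Closed principle).
struct LoaderWithFallback: ItemsLoading {
    let primary: ItemsLoading
    let fallback: ItemsLoading
    let isNetworkAvailable: () -> Bool

    func loadItems() -> [Item] {
        isNetworkAvailable() ? primary.loadItems() : fallback.loadItems()
    }
}

let offline = LoaderWithFallback(
    primary: RemoteLoader(),
    fallback: LocalLoader(),
    isNetworkAvailable: { false }
)
assert(offline.loadItems() == [Item(id: 2)]) // falls back to the local loader
```

Injecting the network check as a closure also makes the fallback decision trivially testable.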


I would like to thank Caio & Mark and the entire EssentialDeveloper community. 🎉

Understanding Swift Performance


Understand the implementation to understand performance

Dimensions of Performance

  1. When you are building an abstraction and choosing an abstraction mechanism, ask yourself: “Is my instance going to be allocated on the stack or the heap?” Allocation (Stack or Heap)
  2. When I pass this instance around, how much reference-counting overhead am I going to incur? Reference Counting (Less or More)
  3. When I call a method on this instance, is it going to be statically or dynamically dispatched? Method Dispatch (Static or Dynamic)

If we want to write fast Swift code, we’re going to need to avoid paying for dynamism and runtime that we are not taking advantage of. And we’re going to need to learn when and how we can trade between these 3 different dimensions for better performance.

Swift allocates and deallocates memory on your behalf; some of that memory is allocated on the stack, and some on the heap.
– The stack is a really simple data structure: you can push onto the end of the stack and pop off the end. Because you can only add or remove at the end, push and pop can be implemented just by keeping a pointer to the end of the stack (a.k.a. the stack pointer).


  • Stack
    • Decrement stack pointer to allocate (When we call into a function, we can allocate that memory that we need just trivially decrementing the stack pointer to make space.)
    • Increment stack pointer to deallocate. (Then after executing the function, we deallocate that memory just by incrementing the pointer back up to where it was before we called the function)
    • Allocating and deallocating from the stack is fast. It is literally the cost of assigning an integer.
  • Heap
    • Advanced Data Structure (The Heap is more dynamic but less efficient than the Stack. It lets you do things that Stack can’t, like, allocate the memory with a dynamic lifetime. But, that requires a more advanced data structure.)
    • Search for unused block of memory to allocate
    • Reinsert block of memory to deallocate
    • Heap costs more than just assigning an integer like Stack did.
    • Thread safety overhead. (Because multiple threads can be allocating memory on the heap at the same time, the heap needs to protect its integrity by using locking or other synchronization mechanisms.)
  • Examples:
    • There is a struct Point. Whenever I create an instance of Point, since it is a value type, it is allocated on the stack (value semantics).
    • There is a class Point. Whenever I create an instance of Point, since it is a reference type, it is allocated on the heap: the heap is locked, the instance is created, and sharing the reference can lead to unintended shared state (reference semantics). Deallocation means locking the heap again and returning the unused block to the appropriate position.

🚀 Classes are more expensive to construct than structs because classes require a heap allocation. Because classes are allocated on the heap and have reference semantics, classes have some powerful characteristics like identity and indirect storage. But, if we don’t need those characteristics for abstraction, we’re going to better use a struct.

🚀 Structs aren’t prone to the unintended sharing of state like classes are.
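The difference is easy to demonstrate with two otherwise identical Point types:

```swift
struct PointValue { var x, y: Double }

final class PointRef {
    var x, y: Double
    init(x: Double, y: Double) { self.x = x; self.y = y }
}

var a = PointValue(x: 0, y: 0)
var b = a          // copy: b gets its own storage
b.x = 5
assert(a.x == 0)   // 'a' is unaffected: no shared state

let c = PointRef(x: 0, y: 0)
let d = c          // copy of the reference: both point to the same heap instance
d.x = 5
assert(c.x == 5)   // unintended sharing: mutating via 'd' changed 'c'
```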

🚀 Let’s say I have to cache UIImages by name, so that the app wouldn’t have to create the same images over and over. It would not be smart to build the cache key from a String like this: let key = "\(color):\(orientation):\(name)"

String isn’t particularly a strong type for this key. I’m using it to represent this configuration space, but I could just as easily put the name of my dog in that key. So, not a lot of safety there. Also, String can represent so many things because it actually stores the contents of its characters indirectly on the heap. So, that means every time we’re calling into this function, even if we have a cache hit, we’re incurring a heap allocation.

In Swift, we can represent this configuration space of color, orientation, and name just by using a struct. This is a much safer way to represent this configuration space than a String. And because structs are first class types in Swift, they can be used as the key in our dictionary.

struct Attributes {
    var color: Color // enum type
    var orientation: Orientation // enum type
    var name: Name // some custom enum type
}

Now the key can be written as below; since structs are first-class types in Swift,
they can be used as the key in our dictionary:
let key = Attributes(color: color, orientation: orientation, name: name)

Now, when we call the above function, if we have a cache hit, there’s no allocation overhead because constructing a struct like this attributes one, doesn’t require any heap allocation. It can be allocated on the stack. So, this is a lot safer and it’s going to be a lot faster.
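To make the dictionary-key usage concrete, here is a self-contained variant; the enum cases are made up for illustration:

```swift
// Enums without associated values and structs whose properties are all
// Hashable get synthesized Hashable conformance, so Attributes can be a
// dictionary key with no extra code.
enum Color: Hashable { case red, blue }
enum Orientation: Hashable { case up, down }
enum Name: Hashable { case logo, avatar }

struct Attributes: Hashable {
    var color: Color
    var orientation: Orientation
    var name: Name
}

var cache = [Attributes: String]() // the value would be a UIImage in a real app
let key = Attributes(color: .red, orientation: .up, name: .logo)
cache[key] = "cached-image"
assert(cache[key] == "cached-image") // cache hit; the key needed no heap allocation
```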

Reference Counting

Swift keeps a count of the total number of references to any instance on the heap. And it keeps it on the instance itself. When you add a reference or remove a reference, that reference count is incremented or decremented. When that count hits zero, Swift knows no one is pointing to this instance on the heap anymore and it’s safe to deallocate that memory.

The key thing to keep in mind with reference counting is this is a really frequent operation and there’s actually more to it than just incrementing and decrementing an integer. First, there are a couple levels of indirection involved to just go and execute the increment and decrement. But, more importantly, just like with heap allocation, there is thread safety to take into consideration because references can be added or removed to any heap instance on multiple threads at the same time, we actually have to atomically increment and decrement the reference count. And because of the frequency of reference counting operations, this cost can add up.

There’s more to reference counting than incrementing, decrementing

  • Indirection
  • Thread safety overhead
  • Examples:
    • There is a class Point. Let’s assume I created an instance of Point which is point1, now point1 has gained an additional property, refCount. And we see that Swift has added a couple calls to retain and a couple calls to release. Retain is going to atomically increment our reference count and release is going to atomically decrement our reference count.
    • In this way, Swift will be able to keep track of how many references are alive to our point on the heap. And if we trace through this quickly, we can see that after constructing our point on the heap, it’s initialized with a reference count of one because we have one live reference to that point. As we go through our program and we assign point1 to point2, we now have two references and so Swift has added a call to atomically increment the reference count of our point instance. As we keep executing, once we’ve finished using point1, Swift has added a call to atomically decrement the reference count because point1 is no longer really a living reference as far as it’s concerned. Similarly, once we’re done using point2, Swift has added another atomic decrement of the reference count. At this point, no more references are making use of our point instance, so Swift knows it’s safe to lock the heap and return that block of memory to it.
    • There is a struct Point. Well, when we constructed our point struct, there was no heap allocation involved. When we copied, there was no heap allocation involved. There were no references involved in any of this. So, there’s no reference counting overhead for our point struct.
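The class trace above can be annotated in code. The retain/release calls shown in comments are what the compiler inserts on our behalf; they are not written by hand:

```swift
final class Point {
    var x, y: Double
    init(x: Double, y: Double) { self.x = x; self.y = y }
}

let point1 = Point(x: 0, y: 0) // heap allocation, refCount == 1
let point2 = point1            // retain(point2)  -> refCount == 2
assert(point1 === point2)      // both references share one heap instance
// after point1's last use:       release(point1) -> refCount == 1
// after point2's last use:       release(point2) -> refCount == 0, deallocate
```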

What about a more complicated struct, though?

Let’s assume I have a Label struct that contains text, which is of type String, and font, of type UIFont. String, as we heard earlier, actually stores the contents of its characters on the heap, so that needs to be reference counted. And font is a class, so that also needs to be reference counted. If we look at our memory representation, the label has two references. When we make a copy of it, we’re actually adding two more references: another one to the text storage and another one to the font. The way Swift tracks these heap allocations is by adding calls to retain and release.

So, with the instances of label struct, we see the label is actually going to be incurring twice the reference counting overhead that a class would have.
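A sketch of that label example, using a stand-in Font class instead of UIFont so the snippet stays self-contained:

```swift
final class Font {} // stand-in for UIFont, which is a class

struct Label {
    var text: String // heap-backed character storage -> reference counted
    var font: Font   // class reference               -> reference counted
}

let label1 = Label(text: "Hello", font: Font())
let label2 = label1 // copying the struct retains both the text storage and the font
assert(label2.text == label1.text)
assert(label2.font === label1.font) // same font instance, now referenced twice
```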

In summary, because classes are allocated on the heap, Swift has to manage the lifetime of that heap allocation. It does so with reference counting. This is nontrivial because reference counting operations are relatively frequent and because of the atomicity of the reference counting.

If structs contain references, they’re going to be paying reference counting overhead as well.

🚀 In fact, structs are going to pay reference-counting overhead proportional to the number of references they contain. So, if they have more than one reference, they’re going to pay more reference-counting overhead than a class.


struct Attachment {
    let fileURL: URL
    let uuid: String
    let mimeType: String

    init?(fileURL: URL, uuid: String, mimeType: String) {
        guard mimeType.isMimeType else { return nil } // custom String extension check

        self.fileURL = fileURL
        self.uuid = uuid
        self.mimeType = mimeType
    }
}

Above, there is a lot of reference-counting overhead. If we look at the memory representation of this struct, all three of our properties incur reference-counting overhead when we pass them around, because there are references to heap allocations underlying each of these properties.

Improved version:

enum MimeType: String {
    case png, jpeg, gif
}

struct Attachment {
    let fileURL: URL
    let uuid: UUID // It is now a struct
    let mimeType: MimeType // It is now an enum

    init?(fileURL: URL, uuid: UUID, mimeType: String) {
        guard let mimeType = MimeType(rawValue: mimeType) else { return nil }

        self.fileURL = fileURL
        self.uuid = uuid
        self.mimeType = mimeType
    }
}

🚀 Foundation’s UUID type is really great because it stores those 128 bits inline, directly in the struct. So let’s use that. This eliminates the reference-counting overhead we were paying for the uuid field when it was a String, and we get much more type safety, because I can’t just put anything in here; I can only put a UUID. That’s fantastic. Now let’s take a look at mimeType and how the isMimeType check was implemented. We’re actually only supporting a closed set of MIME types: PNG, JPEG, and GIF.

🚀 Swift has a great abstraction mechanism for representing a fixed set of things: an enumeration. So, I’m going to take that switch statement, put it inside a failable initializer, and map those MIME types to the appropriate case in my enum. Now I’ve got more type safety with this MimeType enum, and more performance, because I don’t need to store these different cases indirectly on the heap. Swift actually has a really compact and convenient way of writing this exact code: an enum backed by a raw String value. This is effectively the same code, except it’s even more powerful, has the same performance characteristics, and is way more convenient to write. So if we look at our Attachment struct now, it’s way more type-safe. We’ve got strongly typed uuid and mimeType fields, and we’re not paying nearly as much reference-counting overhead, because UUID and MimeType don’t need to be reference counted or heap allocated.

Method Dispatch

When you call a method at runtime, Swift needs to execute the correct implementation.

If it can determine the implementation to execute at compile time, that’s known as static dispatch. At runtime, we’re just going to jump directly to the correct implementation. This is really cool because the compiler is actually going to have visibility into which implementations are going to be executed, so it’s going to be able to optimize this code pretty aggressively, including things like inlining.

Static Dispatch

  • Jump directly to implementation at runtime.
  • Candidate for inlining and other optimizations.

Dynamic Dispatch

  • Look up implementation in the table at runtime.
  • Then jump to implementation.
  • Prevents inlining and other optimizations.

With dynamic dispatch, we’re not going to be able to determine at compile time which implementation to go to. So at runtime, we’re actually going to look up the implementation and then jump to it. On its own, a dynamic dispatch is not that much more expensive than a static dispatch; there’s just one level of indirection, and none of the thread-synchronization overhead we had with reference counting and heap allocation.

🚀 But this dynamic dispatch blocks the visibility of the compiler. So while the compiler could do all these really cool optimizations for our static dispatches, at a dynamic dispatch, the compiler is not going to be able to reason through it.

What is inlining?

Let’s return to our familiar struct point.

struct Point {
    var x, y: Double
    func draw() {
        // Point.draw implementation
    }
}

func drawAPoint(_ param: Point) {
    param.draw()
}

let point = Point(x: 0, y: 0)
drawAPoint(point)
// When we call via the drawAPoint function, the compiler knows exactly which
// implementations are going to be executed, so it can take our drawAPoint dispatch
// and replace it with the implementation of drawAPoint.

How the compiler sees the above:
let point = Point(x: 0, y: 0)
// Point.draw implementation // inlined directly, no call overhead

The drawAPoint function and the point.draw() method are both statically dispatched. What this means is that the compiler knows exactly which implementations are going to be executed and so it’s actually going to take our drawAPoint dispatch and it’s just going to replace that with the implementation of drawAPoint.

And then, it’s going to take our point.draw() method and, because that’s a static dispatch, it can replace that with the actual implementation of draw() function. So, when we go and execute this code at runtime, we’re going to be able to just construct our point, run the implementation, and we’re done. We didn’t need the overhead of those two static dispatches and the associated setting up of the call stack and tearing it down. So, this is really cool. And this gets to why static dispatches and how static dispatches are faster than dynamic dispatches.

🚀 Comparing a single static dispatch to a single dynamic dispatch, there isn't much of a difference. But in a whole chain of static dispatches, the compiler has visibility through the entire chain, whereas a chain of dynamic dispatches is blocked at every single step from being reasoned about at a higher level. So the compiler can collapse a chain of static method dispatches into a single implementation with no call stack overhead. That's really cool.

So, why do we have this dynamic dispatch thing at all?

It enables really powerful things like polymorphism. Consider a traditional object-oriented program with a Drawable abstract superclass: I can define a Point subclass and a Line subclass that override draw() with their own custom implementations. Then I can write a program that polymorphically creates an array of drawables, which might contain lines and might contain points, and calls draw() on each of them.

class Drawable { func draw() {} }

class Point: Drawable {
    var x, y: Double
    init(x: Double, y: Double) { (self.x, self.y) = (x, y) }
    override func draw() {
        // Point.draw implementation
    }
}

class Line: Drawable {
    var x1, y1, x2, y2: Double
    init(x1: Double, y1: Double, x2: Double, y2: Double) {
        (self.x1, self.y1, self.x2, self.y2) = (x1, y1, x2, y2)
    }
    override func draw() {
        // Line.draw implementation
    }
}

var drawables: [Drawable] = [Point(x: 0, y: 0), Line(x1: 0, y1: 0, x2: 1, y2: 1)]
for d in drawables {
    d.draw()
}

Because Drawable, Point, and Line are all classes, we can create an array of these things, and the elements are all the same size because we store them by reference in the array. Then, as we go through the array, we call draw() on each of them. Now we can see why the compiler can't determine at compile time which implementation is the correct one to execute.

Because this d.draw() could be a Point's draw or a Line's draw; they are different code paths.

So, how does it determine which one to call? The compiler adds another field to classes: a pointer to the type information of that class, stored in static memory. When we call draw(), what the compiler actually generates on our behalf is a lookup through that type information to something called the virtual method table, also in static memory, which contains a pointer to the correct implementation to execute. If we rewrite d.draw() as what the compiler is doing on our behalf, we see it's actually looking up the correct draw implementation through the virtual method table and then passing the actual instance as the implicit self parameter.
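To make the mechanism concrete, here is a hand-rolled stand-in for a virtual method table. This is purely illustrative (the dictionary and all names are my own, not the real ABI): it maps each runtime type to a draw implementation and looks that implementation up through the instance's dynamic type, just as the compiler-generated code does.

```swift
// Illustrative only: a hand-rolled "virtual method table" keyed by runtime type.
// The real compiler-generated table lives in static memory; this sketch just
// mimics the lookup-then-call shape of dynamic dispatch.
protocol Shape {}
struct PointShape: Shape { var x, y: Double }
struct LineShape: Shape { var x1, y1, x2, y2: Double }

let vtable: [ObjectIdentifier: (Shape) -> String] = [
    ObjectIdentifier(PointShape.self): { _ in "Point.draw" },
    ObjectIdentifier(LineShape.self): { _ in "Line.draw" },
]

func dynamicDraw(_ shape: Shape) -> String {
    // Look up the implementation through the dynamic type, then call it,
    // passing the instance itself (the "implicit self" of this sketch).
    vtable[ObjectIdentifier(type(of: shape))]!(shape)
}
```

Calling dynamicDraw(PointShape(x: 0, y: 0)) resolves to the Point implementation at runtime, not at compile time.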

So, what have we seen here?

Classes by default, dynamically dispatch their methods. This doesn’t make a big difference on its own, but when it comes to method chaining and other things, it can prevent optimizations like inlining and that can add up.

🚀 Not all classes, though, require dynamic dispatch. If you never intend for a class to be subclassed, you can mark it as final to convey to your fellow teammates and to your future self that that was your intention. The compiler will pick up on this and it’s going to statically dispatch those methods. Furthermore, if the compiler can reason and prove that you’re never going to be subclassing a class in your application, it’ll opportunistically turn those dynamic dispatches into static dispatches on your behalf.
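As a small sketch of that point (the class and method names here are my own):

```swift
// Marking a class final tells both your teammates and the compiler that it
// will never be subclassed, so its methods can be statically dispatched.
final class Renderer {
    func render() -> String {
        "rendered"
    }
}

let renderer = Renderer()
// No subclass can override render(), so this call needs no vtable lookup
// and is even eligible for inlining.
print(renderer.render())
```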

Whenever you’re reading and writing Swift code, you should be looking at it and thinking,

  • Is this instance going to be allocated on the stack or the heap?
  • When I pass this instance around, how much reference counting overhead am I going to incur?
  • When I call a method on this instance, is it going to be statically or dynamically dispatched?
    If we’re paying for dynamism we don’t need, it’s going to hurt our performance.


Mirroring and Reflection in Swift

Reflection is a form of metaprogramming that allows you to extract information from a data structure at runtime.

Swift's version of reflection enables us to iterate over, and read the values of, all the stored properties that a type has, whether that's a struct, a class, or any other type, enabling a sort of metaprogramming that lets us write code that interacts with the code itself.

Swift provides a Mirror struct: if you create a mirror of an instance, it will tell you whether that instance is a class, struct, enum, or tuple. It does not work on all types, however; closures, for example. If it cannot tell what the type is, its displayStyle property is nil.

There is a children property whose elements each have a label and a value. You can recursively go through the properties of a data structure, walking down the tree, to figure out everything that is going on.
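A minimal sketch of the API (the User type below is just an example):

```swift
struct User {
    let name: String
    let age: Int
}

let mirror = Mirror(reflecting: User(name: "Ada", age: 36))

// displayStyle tells us what kind of type we reflected.
print(mirror.displayStyle == .struct)

// children gives us each stored property's label and value.
for child in mirror.children {
    print("\(child.label ?? "?") = \(child.value)")
}
```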


SwiftLint is a tool to enforce Swift style and conventions. It is developed by Realm.


On this page, we will install SwiftLint via Homebrew.

Open terminal and run

$ brew install swiftlint

Once installed, navigate to your project folder in the terminal and create a .swiftlint.yml file:

$ touch .swiftlint.yml

The .swiftlint.yml file is where we define the linter configuration. In the configuration file, you can add, disable, or update linting rules.

You can see a sample of this file’s structure here.

Rule inclusion in the .swiftlint.yml file:

  • disabled_rules: Disable rules from the default enabled set.
  • opt_in_rules: Enable rules not from the default set.
  • only_rules: Only the rules specified in this list will be enabled. Cannot be specified alongside disabled_rules or opt_in_rules.
  • analyzer_rules: An entirely separate list of rules that are only run by the analyze command. All analyzer rules are opt-in, so this is the only configurable rule list; there are no disabled_rules or only_rules equivalents.
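As an illustration, a minimal .swiftlint.yml might look like the sketch below. The keys and rule names are real SwiftLint ones, but which rules you enable or disable is entirely up to your team:

```yaml
disabled_rules:
  - trailing_whitespace
opt_in_rules:
  - empty_count
included:
  - Source
excluded:
  - Pods
line_length: 120
```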

Open the .swiftlint.yml file to edit:

$ open .swiftlint.yml

After you have configured the .swiftlint.yml file, save and close it.

Then navigate into your project: Xcode Target > Build Phases > + icon > New Run Script Phase

Add the script below:

if which swiftlint >/dev/null; then
  swiftlint
else
  echo "warning: SwiftLint not installed, download from https://github.com/realm/SwiftLint"
fi

Place your build script near the top of Build Phases, then build your project.

In my example project below, SwiftLint’s script is the second one.

SwiftLint has a built-in feature that can correct some types of violations automatically.
Run the command below in the root directory of your iOS project to autocorrect:

$ swiftlint autocorrect

When you run the above command, the files on disk are overwritten with a corrected version. So make sure you have backups of these files before running swiftlint autocorrect, in case any data is lost.

Now that you have configured your linter in Xcode, you may want to check some common Swift style guides as well.

Run SwiftLint on custom iOS file locations

  • add your file locations under the included key:

  included:
    - Source
    - Tests
    - ../

Here is my custom .swiftlint.yml file.

Here you can find the source code and learn more about SwiftLint.

Closures in Swift

Closures are blocks of code that define functionality in your code. Closures can capture and store references to any constants and variables from their context. Swift handles all of the memory management of capturing. Yet, when building hierarchies of objects and closures, you have to consider carefully which type will act as the owner of each given instance. With a clear hierarchy, parent instances are responsible for retaining their children, child instances hold “weak” references back to their parents, and a lot of memory-related problems are avoided.

@escaping: When a closure argument may be executed after the function it was passed to returns, it must be marked @escaping. The passed closure then outlives the function’s scope and stays in memory until it gets executed. Asynchronous work, such as waiting for an API response or dispatching to Grand Central Dispatch for an animation, causes the @escaping argument to be stored in memory until the animation is done or the API delivers its response. In those cases, we should capture self weakly ([weak self]) to avoid memory issues.

Non-escaping: When the passed closure is executed within the function itself before the function returns. Once execution ends, the passed closure goes out of scope and no longer exists in memory. (This is the default.)
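A minimal sketch of the difference (the ImageLoader type and its members are hypothetical):

```swift
final class ImageLoader {
    var completionHandlers: [() -> Void] = []

    // Escaping: the closure is stored and outlives this function call,
    // so it must be marked @escaping.
    func load(completion: @escaping () -> Void) {
        completionHandlers.append(completion)
    }

    // Non-escaping (the default): the closure is executed before the
    // function returns and never outlives it.
    func transform(_ work: () -> Int) -> Int {
        work()
    }
}

let loader = ImageLoader()
loader.load { [weak loader] in
    // Capture weakly so the stored closure doesn't retain the loader.
    print(loader?.completionHandlers.count ?? 0)
}
```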

Memory Management in Swift

Swift uses ARC memory management model.

Retain cycle problem: when two objects reference each other, or when self is captured in a closure, a retain cycle can occur.

1. Referencing an object increments its retain count: For example, let’s say we have a Stationery.swift class and a Notebook.swift class, and each holds a reference to an instance of the other. If we create instances of both classes, a retain cycle occurs. As a solution, we must break the retain cycle by making one of the references, for example the notebook’s reference back to the stationery, “weak”.

2. Closures: Just as referencing an instance through a property increments its retain count, so does capturing the instance in a closure.
For example, if we use a closure on a stationery object to observe whenever a notebook instance is sold, and that closure captures the same stationery object, we again cause a retain cycle.
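The Stationery/Notebook example can be sketched as follows, assuming minimal class shapes (the property names are my own):

```swift
final class Stationery {
    var notebooks: [Notebook] = []  // strong: the parent owns its children
}

final class Notebook {
    weak var stationery: Stationery?  // weak: breaks the cycle back to the parent
}

var stationery: Stationery? = Stationery()
let notebook = Notebook()
notebook.stationery = stationery
stationery?.notebooks.append(notebook)

// Because the back reference is weak, releasing the last strong reference
// to the stationery actually deallocates it, and the weak reference is zeroed.
stationery = nil
print(notebook.stationery == nil)
```

Had Notebook held a strong reference instead, neither object could ever be deallocated.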

Publishing Libraries on Jitpack

Publishing Single Library on Jitpack

JitPack is a novel package repository for JVM and Android projects. It builds Git projects on demand and provides you with ready-to-use artifacts (jar, aar).
1. Create a new Android project.
2. Add your library by File -> New -> Import Module.
3. Add the JitPack maven repository to the list of repositories:

repositories {
    jcenter()
    maven { url "https://jitpack.io" }
}

Note: when using multiple repositories in build.gradle it is recommended to add JitPack at the end. Gradle will go through all repositories in order until it finds a dependency.
4. Then publish your new project to your GitHub account and push your first version tag.
Note: You have to grant JitPack access to your GitHub account.
single library example

Publishing Multiple Libraries on Jitpack

1. Create a new Android project.
2. Add your libraries one by one via File -> New -> Import Module.
3. Add the JitPack maven repository to the list of repositories:

repositories {
    jcenter()
    maven { url "https://jitpack.io" }
}

Note: when using multiple repositories in build.gradle it is recommended to add JitPack at the end. Gradle will go through all repositories in order until it finds a dependency.
4. Then publish your new project to your GitHub account and push your first version tag.
Note: You have to grant JitPack access to your GitHub account.
example for multiple libraries

Important Notes: 🚀
Let’s assume you have module1, module2, and module3 in your Android project.
To share only one module, you can add:

implementation 'com.github.yourUser.yourProject:module1:Tag'

Or to share your whole project directly:

implementation 'com.github.yourUser:yourProject:Tag'
If you apply plugins such as androidanalyser, a developer who wants to use your project must embed them in her/his app as well. So be sure that you embed as few plugins as possible.

Lastly, JitPack ignores files according to the .gitignore file. So make sure that no files required by the build are listed in .gitignore.


“Defines whether widgets contained in this layout are baseline-aligned or not.”

By setting android:baselineAligned="false", the app prevents the layout from aligning its children’s baselines. That means the layout doesn’t have to work out where the baselines of its other elements are, which improves UI performance.

Note: By default, baselineAligned is set to true.

TDD in Practice

When it comes to testing in Swift, there are three keys:

1 – Design your code for clear testability.

Unified Input & Output: In the functional programming world there is a lot of talk about pure functions, which simply means that the same input will always produce the same output no matter where the function is called. You don’t have to become a Haskell programmer and rewrite all your code as pure functions, yet you may want to take inspiration from them to make your code easier to test.

Keep Our State Local: In the Apple community a lot of people use the singleton pattern simply because they are used to it. We have seen Apple do it with UIScreen, UIDevice, Bundle, etc. So we, as iOS devs, tend to use it by default even when we really don’t need it. While singletons can be nice for sharing some APIs, they can also lead you down dangerous paths, such as undefined state. So before using one, we should ask whether we really need it. In summary, my humble suggestion is to try to keep state local, so that we end up with fewer bugs and code that is easier to test.

Dependency Injection: When you are testing a state or a piece of functionality, everything needed to produce the output should be supplied when you call the function under test. Try not to keep hidden variables in the class itself; instead, pass them in through initializers. That way, the code will always generate the same output for the same input, no matter when or where it runs.
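A minimal sketch of initializer injection (all type names here are hypothetical): by injecting a DateProviding dependency instead of calling Date() inside the method, the same input always yields the same output, which makes the behavior trivially testable.

```swift
import Foundation

protocol DateProviding {
    var now: Date { get }
}

// Production implementation: the real clock.
struct SystemDateProvider: DateProviding {
    var now: Date { Date() }
}

// Test double: a fixed date, so tests are deterministic.
struct FixedDateProvider: DateProviding {
    let now: Date
}

struct GreetingService {
    let dateProvider: DateProviding

    func greeting() -> String {
        let hour = Calendar.current.component(.hour, from: dateProvider.now)
        return hour < 12 ? "Good morning" : "Good afternoon"
    }
}

// In tests, inject the fixed provider through the initializer.
let nineAM = Calendar.current.date(from: DateComponents(year: 2024, month: 1, day: 1, hour: 9))!
let service = GreetingService(dateProvider: FixedDateProvider(now: nineAM))
print(service.greeting())
```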

2 – Remember that you are going to write tests against your entire public API. So be careful with access modifiers.

Always keep going with framework-oriented programming, so that your code will be easy to test and easy to reuse.

3 – We all need mocks when it comes to testing, yet be very careful while using them. Remember, your only purpose is writing tests that check your real code. Mocks come with a cost and extra complexity, and you can end up testing too much of your implementation instead of your API.
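As a sketch of a lightweight mock (all names here are hypothetical), record only what the test actually needs, in this case the events that were logged, rather than mirroring the whole implementation:

```swift
protocol AnalyticsLogging {
    func log(event: String)
}

// A minimal mock: it records calls and nothing more.
final class AnalyticsLoggingMock: AnalyticsLogging {
    private(set) var loggedEvents: [String] = []
    func log(event: String) {
        loggedEvents.append(event)
    }
}

struct CheckoutFlow {
    let analytics: AnalyticsLogging
    func completePurchase() {
        analytics.log(event: "purchase_completed")
    }
}

let mock = AnalyticsLoggingMock()
CheckoutFlow(analytics: mock).completePurchase()
print(mock.loggedEvents)
```

The test then asserts on the recorded events, observable behavior, instead of reaching into implementation details.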