Network models

Constructing a network model is a way to understand the formation process of a network, and thereby also to shed light on its function. A network model is nothing but a set of rules that, in a static or dynamic way, in the end specifies what connects to what. A network model can also create generic classes of networks, which can be used as playgrounds for dynamic models on networks, for example to thoroughly examine vaccination strategies on different network topologies.

All network models are originally motivated by empirical studies. Observed topological features, such as short path lengths, high clustering, and nonrandom degree distributions, have been followed by models that reproduce the desired attributes in an attempt to uncover their dynamic origin (Barabási and Albert 1999; Klemm and Eguíluz 2002; Watts and Strogatz 1998). We learned from the quotation from Sylvie and Bruno Concluded by Lewis Carroll that a good model is a simple model, and details are therefore by necessity left out. This is the essence of the different classes of network models presented in this chapter.

Simple networks

A lattice is the simplest possible interaction pattern (Figure 6(a)). The extent to which it has been used in statistical physics to model dynamic systems cannot be overstated. In fact, many systems can be accurately modeled by having their parts interact on a lattice layout. For example, magnets are accurately modeled by interactions on a lattice in the Ising model (Ising 1925), simply because the lattice is a good representation of the real interaction pattern. A lattice is also a good choice if the aim is to understand the intrinsic features of a model: the results can in some cases be derived analytically, since the effect of the interaction pattern is the simplest possible. However, a lattice is not a good network model when the effect of the interaction pattern itself plays an important role, since it is a bad representation of typical interactions in the real world (Watts and Strogatz 1998).

Random networks


Figure 6: Four different network models, all designed to mimic real-world networks. (a) A 2-dimensional lattice. (b) n = 24 nodes and m = 60 links, corresponding to p = 0.14 in the random network model by Erdős and Rényi. The average degree is consequently ⟨k⟩ = 4. (c) The small-world model by Watts and Strogatz; 24 nodes, each connected to the first and second nearest neighbors in the one-dimensional ring. 4 links are rewired, which corresponds to a rewiring probability p = 0.08. (d) A scale-free network with degree distribution P(k) ∝ k^(−2.2).

Random networks, or ER-networks after Erdős and Rényi (1959), briefly discussed earlier and illustrated in Figure 6(b), should not be mixed up with the MS-randomized networks in Figure 3. ER-networks play a central role in the study of complex networks because the underlying mechanisms of organization in nature are sometimes random (Barabási and Albert 2002). The advantage of the ER-model presented below is that, thanks to its simplicity, many properties can be analyzed analytically.

(i) The total number, n, of nodes is fixed;
(ii) The probability that two arbitrary nodes are connected is p.

The average number of links is consequently

m = pn(n − 1)/2. (4)

The degree distribution is binomial

P(k) = (n − 1 choose k) p^k (1 − p)^(n−1−k), (5)

since for a given node the probability of a particular set of k links is p^k, the probability that the remaining n − 1 − k possible links are absent is (1 − p)^(n−1−k), and there are (n − 1 choose k) ways to choose the k end nodes. For large n, P(k) takes the form

P(k) = e^(−⟨k⟩) ⟨k⟩^k / k!, (6)

which is the narrow Poisson distribution, peaked at the average degree

⟨k⟩ = p(n − 1) ≈ pn. (7)

Random networks have been extensively studied, and one conclusion is that for ⟨k⟩ ≥ ln(n) almost every network is connected and the average shortest path is (Barabási and Albert 2002; Dorogovtsev and Mendes 2002)

l_rand ≈ ln(n) / ln⟨k⟩. (8)

The clustering coefficient is

C_rand = p = ⟨k⟩/(n − 1) ≈ ⟨k⟩/n, (9)

since the probability that two of a node’s neighbors are connected is p.
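The ER-model above is simple enough to check numerically. Below is a minimal sketch in Python (my own illustration, not code from any of the cited works), verifying Equations (7) and (9) by direct simulation:

```python
import random

def erdos_renyi(n, p, seed=1):
    """ER-model: each of the n(n-1)/2 node pairs is linked
    independently with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def average_degree(adj):
    return sum(len(nb) for nb in adj.values()) / len(adj)

def average_clustering(adj):
    """Mean fraction of connected neighbor pairs; for the ER-model
    this approaches p, in line with Equation (9)."""
    total, counted = 0.0, 0
    for i, nb in adj.items():
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for u in nb for v in nb if u < v and v in adj[u])
        total += 2 * links / (k * (k - 1))
        counted += 1
    return total / counted if counted else 0.0

net = erdos_renyi(n=1000, p=0.01)
print(average_degree(net))      # close to p*(n-1), Equation (7)
print(average_clustering(net))  # close to p, Equation (9)
```

The sample averages fluctuate around the analytic values; larger n narrows the fluctuations.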

To study how features of a network deviate from those of an ER-network, random ER-counterparts of the network can be generated in a similar way as the MS-randomized networks: a link is chosen randomly together with two unique nodes that become the new endpoints of the rewired link. This procedure washes away all nonrandom properties, including those associated with the degree distribution. However, we seldom use this strategy, since it has the same disadvantage as the definition of the clustering coefficient in Equation (3): it disregards the extensive effects the degree distribution itself has on many properties.

The ER-networks share the short average path length, Equation (8), with networks in nature, but for large networks the clustering coefficient, Equation (9), is negligible. This is an unrealistic property, since networks in nature are characterized by a high clustering coefficient.

Small-world networks

The concept of a small world dates back to the famous experiment by Milgram (1967). The social psychologist Stanley Milgram sent letters to a few hundred randomly selected individuals in Nebraska and Kansas, USA, with the instruction to reach one of two target persons in the Boston area (Kleinberg 2000; Milgram 1967). The recipients were told to send the letter to a person they knew on a first-name basis, an acquaintance, who they thought was more likely to know the target than they were themselves. Subsequent recipients followed the same procedure. The information given about the target was the name, address, and occupation. The letters were traced by requesting that each participant send Milgram a postcard. The result was that the average length of the resulting acquaintance chains was about six. This is popularly known as the principle of six degrees of separation and was the first evidence that the world is indeed small (Watts 2003).8

Random networks have the small-world property, due to the typical node–node distance l ≈ ln(n)/ln⟨k⟩. However, the clustering coefficient C ≈ ⟨k⟩/n is much smaller than for most real-world networks (Newman 2002b). One of the first models with both a high clustering coefficient and the small-world effect was the small-world model of Watts and Strogatz (1998), also known as the WS-model. At the same time, Ball et al. (1997) presented a very similar model in the context of the spread of epidemics.

In the WS-model, the network is created from a one-dimensional lattice with periodic boundary conditions (see Figure 6(c)). Every node is connected not only to its nearest neighbors, but also to at least its next-nearest neighbors in the one-dimensional lattice. In this way, a node has four or more neighbors and not only two, which would result in a vanishing clustering coefficient. Then, one by one, every link is rewired with probability p in such a way that long-range connections appear. In other words, many links connect locally and a few links are used for long-range connections, as in Figure 6(c). The model is therefore a way to interpolate between lattice models and random networks by adjusting the parameter p. For large p, the WS-model coincides with the random network model. But even for small probabilities of rewiring, where the local properties are almost unchanged, the average shortest path length is of the order of that for random networks. The short-cut links might represent acquaintances one has in other states or countries, explaining the short acquaintance chains in the experiment by Milgram.
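The rewiring procedure is easy to implement. The sketch below is my own minimal WS-style implementation (node i "owns" the links to its clockwise neighbors, and only those are candidates for rewiring, which is one of several possible conventions); it shows how a small rewiring probability collapses the average path length:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=1):
    """Ring lattice where each node links to its k nearest neighbors
    on each side; each link is then rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if j in adj[i] and rng.random() < p:
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].remove(j)
                adj[j].remove(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

def average_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = watts_strogatz(n=200, k=2, p=0.0)
small_world = watts_strogatz(n=200, k=2, p=0.1)
print(average_path_length(ring))         # of order n/(4k) = 25
print(average_path_length(small_world))  # drops to a few steps
```

With p = 0 the network is the pure ring lattice; already at p = 0.1 the few shortcuts bring the average distance down by an order of magnitude while most links remain local.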

The WS-model was one of the first models that could simultaneously explain high clustering and small-world effects. However, the discovery of broad degree distributions in real-world networks ignited a burst of new models to explain this feature (Barabási and Albert 1999; Caldarelli et al. 2002; Faloutsos et al. 1999; Goh et al. 2001).

Scale-free networks

Today, information about different networks in nature is becoming increasingly available (Barabási et al. 1999). The small-world effect and high clustering in real-world networks, which had long been assumed to exist, have now been validated. Much more unexpected was the recent discovery that the degree distributions of at least several large (growing) networks in nature follow a power law (Barabási and Albert 1999; Faloutsos et al. 1999) (Figure 6(d)). As mentioned before, the power-law distribution has been reported for the Internet (Faloutsos et al. 1999), the WWW (Albert et al. 1999), some molecular networks (Jeong et al. 2000), as well as for the size distribution of industrial companies (Zajdenweber 1995) and, much earlier, for citation networks (Price 1965). Moreover, Zipf's law has long been known in a nonnetwork context: for example, ranked word frequencies, city sizes, and income distributions follow power laws (Zipf 1949).

The power-law distribution of degrees k is given by

P(k) ∝ k^(−γ), (10)

where γ is approximately between 2 and 3 in real-world networks (Dorogovtsev and Mendes 2002). Barabási and Albert (1999) called such networks scale-free, since they are free of a characteristic scale: they are scale invariant. In other words, the average degree ⟨k⟩ is an insignificant quantity in a power-law distribution. In contrast, a node in a network with an exponential or Poisson degree distribution typically has degree k ≈ ⟨k⟩. The main feature of scale-free networks is that each node has a statistically significant probability of having a very high degree compared with the average degree ⟨k⟩ of the network. These highly connected nodes dominate the topology of the network by forming hubs, a feature absent in random or small-world networks (see Figure 6).
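The dominance of hubs can be made concrete by sampling degrees from the distribution in Equation (10). A small sketch (inverse-transform sampling of a continuous power law, rounded down to integers; my own illustration with arbitrary parameter choices):

```python
import random

def sample_power_law_degrees(n, gamma, k_min=1, seed=1):
    """Draw n degrees from P(k) ∝ k^(-gamma) for k >= k_min, using
    inverse-transform sampling of the continuous distribution."""
    rng = random.Random(seed)
    return [int(k_min * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0)))
            for _ in range(n)]

degrees = sample_power_law_degrees(n=100_000, gamma=2.5)
mean_k = sum(degrees) / len(degrees)
print(mean_k)        # small: a typical node has low degree
print(max(degrees))  # orders of magnitude above the mean: a hub
```

The sample mean stays of order k_min, while the largest sampled degree grows with n, which is exactly why ⟨k⟩ says so little about a scale-free network.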

The Barabási-Albert Model

Barabási and Albert (1999) argued that the scale-free nature of these networks originates from two generic features of many real-world networks: First, most real networks are open systems that grow by the addition of nodes. Second, the nodes are not linked randomly; there is a preferential attachment that favors attachment to nodes with a high degree. This manifests itself in phenomena like "the rich get richer" and the attractiveness of being popular. As mentioned earlier, the idea was originally modeled in a nonnetwork version by Simon (1955) and in a network version by Price (1976) to explain the power-law degree distributions of citation networks (Price 1965). However, I will here present the Barabási-Albert (BA) model (Barabási and Albert 2002), which contains three elements:

Initial condition: The network consists of a small number n0 of nodes and no links.
Growth: At every time step, a new node with m < n0 links is added, linking to m different nodes already present in the system.
Preferential attachment: The probability P that the new node will be connected to node i depends linearly on the degree ki.

The power-law exponent γ is 3 in this model, independent of m (Barabási and Albert1999).
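The three elements above can be sketched in a few lines. The uniform fallback for the very first attachment step is my own convention, needed because preferential attachment is undefined while all degrees are zero:

```python
import random

def barabasi_albert(n, m, n0, seed=1):
    """BA-model: start from n0 linkless nodes; each new node attaches
    m links, choosing targets with probability proportional to degree."""
    assert m < n0
    rng = random.Random(seed)
    degree = {i: 0 for i in range(n0)}
    # each node appears in `stubs` once per link end, so a uniform
    # choice from `stubs` realizes linear preferential attachment
    stubs = []
    for new in range(n0, n):
        targets = set()
        while len(targets) < m:
            if stubs:
                targets.add(rng.choice(stubs))
            else:  # no links yet: fall back to a uniform choice
                targets.add(rng.randrange(new))
        degree[new] = m
        for t in targets:
            degree[t] += 1
            stubs.extend((t, new))
    return degree

deg = barabasi_albert(n=2000, m=3, n0=5)
print(sum(deg.values()) / len(deg))  # average degree close to 2m = 6
print(max(deg.values()))             # a hub, far above the average
```

The stub-list trick avoids recomputing the attachment probabilities at every step; each of the 2m link ends added per time step is simply appended to the list.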

The continued attempts to find better models are driven by the discovery of new topological properties of real-world networks. For example, the BA-model introduces correlations among the nodes (Grönlund et al. 2005) and other unnatural features, such as a vanishing clustering coefficient as n → ∞ (Barabási and Albert 2002). The fact that the power-law degree distribution only arises for linear preferential attachment (Krapivsky and Redner 2002) questions the model's generality. Another unappealing feature of the BA-model is that the networks are in some way dead, as the model is only a slight generalization of Simon's nonnetwork model in a network context. This means that if the model is a good description of the evolution of networks in nature, then the question "Who is connected to whom?" loses its value, since only the degree of the nodes counts.

Also, many networks in nature with a broad degree distribution show deviations from a pure power-law, typically an exponential cutoff at high degrees

P(k) ∝ k^(−γ) φ(k/ξ), (11)

where φ(k/ξ) is the cutoff at some scale ξ. In the context of the growing BA-model, this phenomenon can be explained if an aging of nodes is introduced (Barabási and Albert 2002). The effect is that old nodes, added early in the network-evolution process, lose some of their attractive power. This limits the ability of nodes with a very high degree to keep growing.

Despite my criticism, I do not question the impact the preferential-attachment model has had on the network community. In particular, since the BA-model was introduced, attention has shifted to growing networks as a way to capture the features of real-world networks.

The BA-model belongs, together with the ER-model and the WS-model, to the first generation of simple models. However, local and global optimization models inspired by evolutionary processes have been developed in the field of biology (West et al. 1997). Local optimization models of nongrowing networks have, on the other hand, scarcely been investigated. The merging model presented below belongs to this class, and in my thesis I present a social-communication approach to the optimization problem.


Information transfer is the fundamental task of many networks in nature. In these networks, some of the nodes receive signals from outside and then transfer these signals to other nodes, which take appropriate action. Information networks include, for example, neural networks with their synaptic rewiring (Cohen-Cory 2002), molecular networks evolved to modulate protein production in living cells (Jeong et al. 2000; Maslov and Sneppen 2002), and social networks exemplified by the Internet (Faloutsos et al. 1999). Networks where material or energy is transmitted between nodes, such as business networks or ecosystems, often also contain an inherent ingredient of information transfer. Therefore, a natural source of inspiration for new models is to consider real-world networks from the perspective of information transfer.


Figure 7: The core of the merging model: To the right, two computing units in a network merge into a larger unit. The reason could be economical, based on efficiency or performance. This is a likely process in, for example, the network of autonomous systems of the Internet (modeled to the left by a scale-free network with P(k) ∝ k^(−2.2)), where companies that own the autonomous systems merge, mainly for financial reasons.

Every action or process in and between the subunits of a system, here simplified and represented by nodes, is more or less complicated and time consuming. Is there a way to optimize the system, locally or globally, so that the overall functionality increases? Two appealing modifications of the network would be better access between nodes (shorter shortest paths) and more efficient and possibly larger computing units. This inspired Kim et al. (2005) to introduce the idea of merging-and-creation on networks, exemplified in Figure 7 by the network of autonomous systems9 of the Internet.10 One version of the merging and regeneration model is:

The regeneration step exists to keep the network alive with a fixed number of nodes. The model presented above gives rise to a power-law degree distribution P(k) ∝ k^(−γ) with a slope γ between 1.5 and 3, depending on variations of the model.

Figure 8: One realization of the merging algorithm with different sign in (a) and equal sign in (b).

In Minnhagen et al. (2004) we investigated different variations of the model, primarily their effect on the degree distribution. We also considered a nonnetwork implementation of the model, to be able to treat the model analytically. Alava and Dorogovtsev (2005) later considered a simplified network implementation analytically. Instead of merging nodes, we let elements qi without any links merge in a large system (i = 1, 2, …, n). The scalar may be either strictly positive (Field and Saslow 1965) (Figure 8(b)) or both positive and negative (Krapivsky 1993; Takayasu 1989) (Figure 8(a)). To get a real-world idea of the simplified model, consider the elements as particles and the scalar as the mass of a particle in a model of dust aggregation in space (Field and Saslow 1965). Other examples are companies and their financial assets, and the vorticity of vortices studied in the context of superconductors and turbulence.

In the core of the process, two randomly chosen elements, i and j, merge, and the new element acquires the sum of the scalars qi + qj. In parallel, there is a creation process of elements with small size q, both positive and negative. I emphasize that the size of the created elements is not essential for the dynamics: the size of a new element can be drawn from a narrow distribution or assigned deterministically. Obviously there are many variants of this process, but they can be classified into three main classes according to the scaling of the size distribution P(q) ∝ q^(−γ) of the elements (see Table 1).

Table 1: Three different classes of the merging model. In the first model all elements q ≥ 0, but in the second model the elements q can be both positive and negative. In the third model a condition for the merging-and-creation event is that q ≥ 0, and r is a random number from a narrow distribution with ⟨r⟩ = 1.

            model 1             model 2             model 3
merging     qi → qi + qj        qi → qi + qj        qi → qi + qj − 1
creation    qj = 1              qj = ±1             qj = r ∈ [0, 2]
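The first of the three classes in Table 1 (all elements q ≥ 0, with unit-size creation) takes only a few lines to simulate. A minimal sketch of my own, not the implementation used in the cited papers:

```python
import random

def merge_and_create(n, steps, seed=1):
    """Nonnetwork merging-and-creation: two random elements merge,
    q_i <- q_i + q_j, and the consumed element j is recreated with
    q_j = 1, keeping the number of elements fixed at n."""
    rng = random.Random(seed)
    q = [1.0] * n
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        q[i] += q[j]  # merging: i absorbs j
        q[j] = 1.0    # creation: j restarts as a unit element
    return q

sizes = merge_and_create(n=1000, steps=50_000)
mean_q = sum(sizes) / len(sizes)
print(mean_q)      # total mass grows by 1 per step: (n + steps)/n = 51
print(max(sizes))  # a few giant elements dominate the distribution
```

Already this toy run shows the hallmark of the merging dynamics: a broad size distribution in which the largest elements are far above the mean.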

In Sneppen et al. (2004) we studied one of the models in detail, the γ = 2 version, and returned to a network implementation.11 The process we studied is the completely ±-symmetric case. A link has a positive end connected to a donor node (+ node) and a negative end connected to an acceptor node (− node). What is interesting about this model is its close connection to the magnetic network of flux tubes on the sun and the associated size distributions of solar flares and sunspots (Hughes et al. 2003; Lu and Russell 1991) (see Figure 9). The model is:

Figure 9: Merging (a) and annihilation (b) in a model of the magnetic network of flux tubes on the sun. Positive nodes (donors) have outgoing links and negative nodes (acceptors) have incoming links. This ±-symmetric dynamics gives rise to a scale-free size distribution of both donors and acceptors (P(q) ∝ q^(−2)) and energy bursts scaling as P(s) ∝ s^(−3), with s being the size of a burst.

This is a minimalistic model of the more complicated 2D-model presented by Hughes et al. (2003). Nevertheless, it gives the same qualitative results. The degree distribution is P(q) ∝ q^(−2) for both donors and acceptors, and the size distribution of the solar flares is P(s) ∝ s^(−3). With the degree of a node interpreted as the size of a magnetic flux line, the results agree with other modeling (Hughes and Paczuski 2003; Lu and Russell 1991). However, the size distribution of solar flares measured by Dennis (1985) and Lin et al. (1984) follows a power law P(s) ∝ s^(−1.7±0.3), which disagrees with our result. According to Lu and Russell (1991), the explanation is that the annihilation of solar flares triggers an avalanche of events that broadens the size distribution of the solar flares.

The advantage of our model is that it can be treated and understood analytically. Its simplistic form also opens the way to a coarse understanding of the fundamental mechanisms of some dynamic systems, not necessarily limited to processes in the sun's magnetic corona. The total dominance of preferential attachment in the network community has overshadowed, and possibly hindered, the detailed investigation of other dynamic models. I believe that merging is one of the underestimated mechanisms that must be investigated further in the search for a complete understanding of the connection between the evolution of networks and their structure and function.

Beyond simple models

As already mentioned, there are numerous other models that produce power-law degree distributions. However, instead of giving examples I would like to mention a model that does not yet exist.

The tolerance is the inherent robustness of a network (Albert et al. 2000). An error is defined as the successive removal of randomly chosen nodes, and an attack as the removal of a few nodes that play a vital role in maintaining the network's connectivity. Vital nodes are those with the highest degree. Compared with random networks, the tolerance against random failures, that is, errors, is high in scale-free networks. The reason is that a high percentage of the nodes in a scale-free network have just a few connections. However, the price for high error-tolerance is vulnerability to attacks: when only a few of the dominating nodes are removed, the network falls apart.
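This error-versus-attack asymmetry is straightforward to reproduce numerically. The sketch below uses a BA-style preferential-attachment network as a stand-in for a generic scale-free topology; the seed graph, sizes, and 5% removal fraction are my own illustrative choices:

```python
import random
from collections import deque

def scale_free(n, m, seed=1):
    """Grow a network by preferential attachment as a stand-in
    for a generic scale-free topology."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m + 1)}
    for i in range(m + 1):             # small complete seed graph, so
        for j in range(i + 1, m + 1):  # attachment odds are defined
            adj[i].add(j)
            adj[j].add(i)
    stubs = [i for i in adj for _ in adj[i]]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))  # P(i) proportional to k_i
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            stubs.extend((t, new))
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component among surviving nodes."""
    alive = set(adj) - removed
    seen, largest = set(), 0
    for start in alive:
        if start in seen:
            continue
        comp = {start}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in alive and v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        largest = max(largest, len(comp))
    return largest

net = scale_free(n=2000, m=2)
rng = random.Random(2)
n_remove = 100  # remove 5% of the nodes
random_removed = set(rng.sample(sorted(net), n_remove))
hubs = set(sorted(net, key=lambda i: len(net[i]), reverse=True)[:n_remove])
print(giant_component(net, random_removed))  # errors: barely shrinks
print(giant_component(net, hubs))            # attack: shrinks far more
```

Removing the same number of nodes at random leaves the giant component almost intact, while targeting the hubs fragments it far more severely, in line with Albert et al. (2000).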

The question is therefore: what does the topology of a scale-free network optimized against attacks look like (Boccaletti et al. 2006)? A better understanding is of crucial importance to ensure the safety, capacity, and performance of man-made structures like the Internet and the WWW, and to exploit all of their possibilities (Dorogovtsev and Mendes 2002).

8 Although this was the first real evidence that the world is small, the concept was not new. Frigyes Karinthy published Chains in 1929, where he argued that everyone on earth is at most five acquaintances away from anyone else (Newman et al., 2006).

9 An autonomous system is a collection of Internet-protocol networks and routers with a common routing policy, typically under the control of an Internet service provider.

10 A simulation is available on

11 A simulation is available on