Network models
Constructing a network model is a way to understand the formation process of a network, and thereby also to shed light on its function. A network model is nothing but a set of rules that, statically or dynamically, in the end specifies what connects to what. A network model can also generate generic classes of networks, which can be used as playgrounds for dynamic models on networks, for example to thoroughly examine vaccination strategies on different network topologies.
All network models are originally motivated by empirical studies. Observed topological features, such as short path lengths, high clustering, and nonrandom degree distributions, have been followed by models that reproduce the desired attributes in an attempt to uncover their dynamic origin (Barabási and Albert, 1999; Klemm and Eguíluz, 2002; Watts and Strogatz, 1998). We learned from the quotation from Sylvie and Bruno Concluded by Lewis Carroll that a good model is a simple model, and details are therefore by necessity left out. This is the essence of the different classes of network models presented in this chapter.
Simple networks
A lattice is the simplest possible interaction pattern (Figure 6(a)). The extent to which it has been used in statistical physics to model dynamic systems cannot be overestimated. In fact, many systems can be accurately modeled by having their parts interact on a lattice layout. For example, magnets are accurately modeled by interactions on a lattice in the Ising model (Ising, 1925), simply because the lattice is a good representation of the real interaction pattern. A lattice is also a good choice if the aim is to understand the intrinsic features of a model. The results can in some cases be derived analytically, since the effect of the interaction pattern is the simplest possible. However, a lattice is not a good network model when the effect of the interaction pattern itself plays an important role, since it is a bad representation of typical interactions in the real world (Watts and Strogatz, 1998).
Random networks
Random networks, or ER networks after Erdős and Rényi (1959), briefly discussed earlier and illustrated in Figure 6(b), should not be confused with the MS-randomized networks in Figure 3. ER networks play a central role in the study of complex networks because the underlying mechanisms of organization in nature are sometimes random (Barabási and Albert, 2002). The advantage of the ER model presented below is that, thanks to its simplicity, many properties can be analyzed analytically.
 (i) The total number, n, of nodes is fixed;
 (ii) The probability that two arbitrary nodes are connected is p.
The average number of links is consequently

〈L〉 = pn(n − 1)∕2,

since each of the n(n − 1)∕2 node pairs is linked with probability p.
Random networks have been extensively studied, and one conclusion is that for 〈k〉 ≳ ln(n) almost every network is connected and the average shortest path length is (Barabási and Albert, 2002; Dorogovtsev and Mendes, 2002)

ℓ ≈ ln(n)∕ln〈k〉. (8)
The clustering coefficient is

C = p ≈ 〈k〉∕n, (9)

since the probability that two of a node’s neighbors are connected is p. To study how the features of a network deviate from those of an ER network, random ER counterparts of the network can be generated in a similar way as the MS-randomized networks: a link is chosen randomly, together with two unique nodes that become the new endpoints of the rewired link. This procedure washes away all nonrandom properties, including those associated with the degree distribution. However, we seldom use this strategy, since it has the same disadvantage as the definition of the clustering coefficient in Equation (3): it disregards the extensive effects the degree distribution itself has on many properties.
The ER networks share the short average path length, Equation (8), with networks in nature, but for large networks the clustering coefficient, Equation (9), is negligible. This is an unrealistic property, since networks in nature are characterized by a high clustering coefficient.
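As a concrete illustration, the ER construction and the two estimates above can be checked numerically. The following sketch (standard library only; the function names are my own) builds an ER network and measures its mean degree and average shortest path length:

```python
import random
from collections import deque

def er_network(n, p, seed=0):
    """ER model: link each of the n(n-1)/2 node pairs with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def avg_shortest_path(adj):
    """Average shortest-path length over all reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

net = er_network(n=400, p=0.02)
mean_k = sum(len(nb) for nb in net.values()) / len(net)
# Here <k> = p(n-1) ~ 8 exceeds ln(400) ~ 6, so the network is almost
# surely connected, and path lengths come out near ln(n)/ln<k> ~ 2.9.
```

The clustering coefficient of this instance is close to p = 0.02, which illustrates why, for large n at fixed 〈k〉, ER networks are far less clustered than networks in nature.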
Small-world networks
The concept of a small world dates from the famous experiment by Milgram (1967). The social psychologist Stanley Milgram sent letters to a few hundred randomly selected individuals in Nebraska and Kansas, USA, with the instruction that the receiver try to reach one of two target persons in the Boston area (Kleinberg, 2000; Milgram, 1967). The recipients were told to send the letter to a person they knew on a first-name basis, an acquaintance who they thought was more likely to know the target than they were themselves. Subsequent recipients followed the same procedure. The information given about the target was the name, address, and occupation. The letters were traced by requesting that each participant send Milgram a postcard. The result was that the average length of the resulting acquaintance chains was about six. This is popularly known as the principle of six degrees of separation and was the first evidence that the world is indeed small (Watts, 2003).^{8}
Random networks have the small-world property, due to the typical node-node distance ℓ ~ ln n∕ln〈k〉. However, the clustering coefficient C = 〈k〉∕n is much smaller than for most real-world networks (Newman, 2002b). One of the first models with both a high clustering coefficient and the small-world effect was the small-world model of Watts and Strogatz (1998), also known as the WS model. Ball et al. (1997) presented a very similar model at the same time, in the context of epidemic spreading.
In the WS model, the network is created from a one-dimensional lattice with periodic boundary conditions (see Figure 6(c)). Every node is connected not only to its nearest neighbors, but also to at least the next-nearest neighbors in the one-dimensional lattice. In this way, a node has four or more neighbors and not only two, which would result in a vanishing clustering coefficient. Then, one by one, every link is rewired with probability p in such a way that long-range connections appear. In other words, most links connect locally and a few links provide long-range connections, as in Figure 6(c). The model is therefore a way to interpolate between lattice models and random networks by adjusting the parameter p. For large p, the WS model coincides with the random network. But even for small rewiring probabilities, where the local properties are almost unchanged, the average shortest path length is of the order of that of random networks. The shortcut links might represent acquaintances one has in other states or countries, explaining the short acquaintance chains in Milgram's experiment.
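The rewiring procedure can be sketched as follows; a minimal implementation of the rules above (the parameter values are arbitrary choices of mine):

```python
import random

def watts_strogatz(n, k, p, seed=1):
    """WS model: a ring where each node links to its k nearest
    neighbours (k even), then every link is rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                       # ring with next-nearest links
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for i in range(n):                       # visit each original link once
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:
                # move the far end to a random node, avoiding
                # self-loops and duplicate links
                while True:
                    j2 = rng.randrange(n)
                    if j2 != i and j2 not in adj[i]:
                        break
                adj[i].remove(j)
                adj[j].remove(i)
                adj[i].add(j2)
                adj[j2].add(i)
    return adj

net = watts_strogatz(n=100, k=4, p=0.1)
# Rewiring conserves the number of links: n*k/2 = 200 in total.
```

At p = 0.1 most of the ring structure, and hence the high clustering, survives, while the few shortcuts already bring the path lengths down toward the random-network value.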
The WS model was one of the first models that could simultaneously explain high clustering and small-world effects. However, the discovery of broad degree distributions in real-world networks ignited a burst of new models to explain this feature (Barabási and Albert, 1999; Caldarelli et al., 2002; Faloutsos et al., 1999; Goh et al., 2001).
Scale-free networks
Today, information about different networks in nature is becoming increasingly available (Barabási et al., 1999). The small-world effect and high clustering in real-world networks, which had long been assumed to exist, have now been validated. Much less expected was the recent discovery that the degree distributions of at least several large (growing) networks in nature follow a power law (Barabási and Albert, 1999; Faloutsos et al., 1999) (Figure 6(d)). As mentioned before, power-law distributions have been reported for the Internet (Faloutsos et al., 1999), the WWW (Albert et al., 1999), some molecular networks (Jeong et al., 2000), the size distribution of industrial companies (Zajdenweber, 1995), and, much earlier, citation networks (Price, 1965). Zipf's law has, however, long been known in a non-network context: for example, the rank distributions of word frequencies, city sizes, and incomes follow power laws (Zipf, 1949).
The power-law distribution of degrees k is given by

P(k) ∝ k^{−γ}.
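For illustration, degrees following such a power law can be drawn by inverse-transform sampling of the continuous distribution P(k) ∝ k^{−γ} for k ≥ k_min (a sketch; the parameter values are arbitrary):

```python
import random

def powerlaw_sample(gamma, k_min=1.0, rng=random):
    """Inverse-transform sample from P(k) ~ k^(-gamma) for k >= k_min.
    Inverting the CDF F(k) = 1 - (k/k_min)^(-(gamma-1)) gives
    k = k_min * (1 - u)^(-1/(gamma - 1)) for uniform u in [0, 1)."""
    u = rng.random()
    return k_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

rng = random.Random(7)
samples = [powerlaw_sample(3.5, rng=rng) for _ in range(20000)]
# For gamma = 3.5 the mean is (gamma-1)/(gamma-2) * k_min = 5/3, yet the
# heavy tail still produces rare degrees far above the mean.
```

The same few lines make the scale-free contrast with ER networks explicit: a Poisson degree distribution has a characteristic scale 〈k〉, while the samples above span several orders of magnitude.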
The Barabási-Albert Model
Barabási and Albert (1999) argued that the scale-free nature of these networks originates from two generic features of many real-world networks: First, most real networks are open systems that grow by the addition of nodes. Second, the nodes do not link randomly; there is a preferential attachment that favors attachment to nodes with high degree. This manifests itself in phenomena like "the rich get richer" and in the attractiveness of being popular. As mentioned earlier, the idea was originally modeled in a non-network version by Simon (1955) and in a network version by Price (1976) to explain the power-law degree distributions of citation networks (Price, 1965). However, I will here present the Barabási-Albert (BA) model (Barabási and Albert, 2002), which contains three elements:
 Initial condition: The network consists of a small number n_{0} of nodes and no links.
 Growth: At every time step, a new node with m < n_{0} links is added, linking to m different nodes already present in the system.
 Preferential attachment: The probability P that the new node will be connected to node i depends linearly on the degree k_{i}: P(k_{i}) = k_{i}∕∑_{j}k_{j}.
The power-law exponent γ is 3 in this model, independent of m (Barabási and Albert, 1999).
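The three elements above can be sketched in a short simulation. To keep the sketch self-contained I seed the growth with a small clique rather than n_{0} isolated nodes (a common variant, since preferential attachment is undefined when all degrees are zero):

```python
import random

def barabasi_albert(n, m, seed=2):
    """BA growth: each new node attaches m links to existing nodes with
    probability proportional to degree (linear preferential attachment)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    stubs = []                       # each node appears once per link end
    for i in range(m + 1):           # seed: a clique of m+1 nodes
        for j in range(i):
            adj[i].add(j)
            adj[j].add(i)
            stubs += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:       # m distinct, degree-biased targets
            chosen.add(rng.choice(stubs))
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
            stubs += [new, t]
    return adj

net = barabasi_albert(n=2000, m=3)
max_k = max(len(nb) for nb in net.values())
# Hubs emerge: the largest degree far exceeds the mean <k> ~ 2m = 6,
# as expected for a broad degree distribution.
```

Drawing a uniform element from the `stubs` list implements linear preferential attachment exactly, because a node of degree k appears k times in the list.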
The continued attempts to find better models are driven by the discovery of new topological properties of real-world networks. For example, the BA model introduces correlations among the nodes (Grönlund et al., 2005) and other unnatural features, such as a vanishing clustering coefficient as n → ∞ (Barabási and Albert, 2002). The fact that the power-law degree distribution arises only for linear attachment (Krapivsky and Redner, 2002) questions the model's generality. Another unappealing feature of the BA model is that its networks are in some sense dead, as the model is only a slight generalization of Simon's non-network model in a network context. This means that if the model is a good description of the evolution of networks in nature, then the question "Who is connected to whom?" loses its value, since only the degrees of the nodes count.
Also, many networks in nature with a broad degree distribution show deviations from a pure power law, typically an exponential cutoff at high degrees,

P(k) ∝ k^{−γ}e^{−k∕k_{c}},

where k_{c} sets the cutoff scale.
Despite my criticism, I do not question the substantial impact the preferential-attachment model has had on the network community. In particular, since the BA model was introduced, attention has shifted to growing networks in order to capture the features of real-world networks.
The BA model belongs, together with the ER model and the WS model, to the first generation of simple models. Local and global optimization models inspired by evolutionary processes have, however, been developed in the field of biology (West et al., 1997). Local optimization models of nongrowing networks have, on the other hand, scarcely been investigated. The merging model presented below belongs to this class, and in my thesis I presented a social communication approach to the optimization problem.
Merging
Information transfer is the fundamental task of many networks in nature. In these networks, some of the nodes receive signals from outside and then transfer these signals to other nodes, which in turn take appropriate action. Information networks include, for example, neural networks with their synaptic rewiring (Cohen-Cory, 2002), molecular networks evolved to modulate the protein production in living cells (Jeong et al., 2000; Maslov and Sneppen, 2002), and social networks exemplified by the Internet (Faloutsos et al., 1999). Networks where material or energy is transmitted between nodes, for example business networks or ecosystems, often also contain an inherent ingredient of information transfer. A natural source of inspiration for new models is therefore to consider real-world networks from the perspective of information transfer.

Every action or process in and between the subunits of a system, here simplified and represented by nodes, is more or less complicated and time consuming. Is there a way to optimize the system, locally or globally, so that the overall functionality increases? Two appealing modifications of a network would be better access between nodes (shorter shortest paths) and more efficient, possibly larger, computing units. This inspired Kim et al. (2005) to introduce the idea of merging-and-creation on networks, exemplified in Figure 7 by the network of autonomous systems^{9} of the Internet.^{10} One version of the merging and regeneration model is:
 Choose a node i and one of its neighbors j randomly.
 Let i and j merge by absorbing the link between them. Choose one of the nodes as the surviving node (i). By copying to i all links from j that i did not already have (all double links are removed), i now has the same neighbors as i and j had together before the merging event.
 Add a new node with k_{new} links to k_{new} nodes in the network. The distribution of k_{new} is not essential for the model.
The regeneration step keeps the network alive with a fixed number of nodes. The model presented above gives rise to a power-law degree distribution P(k) ∝ k^{−γ} with an exponent γ between 1.5 and 3, depending on variations of the model.
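The three steps above can be sketched as follows; the ring initial condition and the fixed k_new = 2 are arbitrary choices of mine, in line with the remark that the distribution of k_new is inessential:

```python
import random

def merge_and_regenerate(n, steps, k_new=2, seed=3):
    """Merging model: merge a random node with a random neighbour,
    then regenerate by adding a new node with k_new links."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}  # ring start
    next_id = n
    for _ in range(steps):
        i = rng.choice([u for u in adj if adj[u]])  # node with a neighbour
        j = rng.choice(sorted(adj[i]))
        for v in adj.pop(j):        # i absorbs j's links; the i-j link
            adj[v].discard(j)       # and any double links disappear
            if v != i:
                adj[i].add(v)
                adj[v].add(i)
        new, next_id = next_id, next_id + 1
        adj[new] = set()            # regeneration keeps n fixed
        for t in rng.sample(sorted(u for u in adj if u != new), k_new):
            adj[new].add(t)
            adj[t].add(new)
    return adj

net = merge_and_regenerate(n=200, steps=1000)
# Each step removes one node (the merge) and adds one (the regeneration),
# so the number of nodes stays fixed at n = 200.
```

After many steps, a few heavily merged nodes accumulate most of the links while the regenerated nodes keep refilling the low-degree end, which is the mechanism behind the broad degree distribution.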

In Minnhagen et al. (2004) we investigated different variations of the model, primarily their effect on the degree distribution. We also considered a non-network implementation of the model, in order to treat it analytically. Alava and Dorogovtsev (2005) later considered a simplified network implementation analytically. Instead of merging nodes, we let elements q_{i} without any links merge, in a large system (i = 1, 2, …, n). The scalar may be either strictly positive (Field and Saslow, 1965) (Figure 8(b)) or both positive and negative (Krapivsky, 1993; Takayasu, 1989) (Figure 8(a)). To get a real-world idea of the simplified model, consider the elements as particles and the scalar as the mass of a particle in a model of dust aggregation in space (Field and Saslow, 1965). Other examples are companies and their financial assets, and the vorticity of vortices studied in the context of superconductors and turbulence.
At the core of the process, two randomly chosen elements, i and j, merge, and the new element acquires the sum of the scalars, q_{i} + q_{j}. In parallel, there is a creation process of elements with small size ∣q∣, both positive and negative. I emphasize that the size of the created elements is not essential for the dynamics, since the size of a new element can be drawn from a narrow distribution or assigned deterministically. Obviously there are many variants of this process, but they can be classified into three main classes according to the scaling of the size distribution P(q) ∝ q^{−γ} of the elements (see Table 1).
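The non-network process fits in a few lines. The creation sizes, drawn uniformly from a narrow interval, are an arbitrary choice of mine, which, as argued above, is inessential for the dynamics:

```python
import random

def scalar_merging(n, steps, signed=True, seed=4):
    """Non-network merging: two random elements merge into their sum;
    each step also creates a new element of small (optionally signed) size."""
    rng = random.Random(seed)
    def new_element():
        sign = rng.choice([-1, 1]) if signed else 1
        return sign * rng.uniform(0.5, 1.5)   # narrow creation distribution
    q = [new_element() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(len(q)), 2)
        merged = q[i] + q[j]
        q = [x for idx, x in enumerate(q) if idx not in (i, j)]
        q.append(merged)            # two elements become one...
        q.append(new_element())     # ...and creation keeps the count fixed
    return q

sizes = scalar_merging(n=100, steps=2000)
# The population size stays fixed at n = 100, while repeated merging
# builds up elements much larger than any freshly created one.
```

Collecting `abs(x)` over many runs gives the size distribution P(q) whose scaling distinguishes the three classes in Table 1.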

In Sneppen et al. (2004) we studied one of the models in detail, the γ = 2 version, and returned to a network implementation.^{11} The process we studied is the completely ±symmetric case. A link has a positive end connected to a donor node (+ node) and a negative end connected to an acceptor node (− node). What makes this model interesting is its close connection to the magnetic network of flux tubes on the Sun and the associated size distributions of solar flares and sunspots (Hughes et al., 2003; Lu and Russell, 1991) (see Figure 9). The model is:
 Merge two random nodes i and j. There are two possibilities:
(a) If they have the same sign, all the links from i and j are assigned to the merged node. Thereby the merged node has the same neighbors as i and j had together prior to the merging (Figure 9(a)).
(b) If i and j have different signs, the resulting node is assigned the sign of the sum q_{i} + q_{j}. Thereby max{∣q_{i}∣, ∣q_{j}∣} − ∣q_{i} + q_{j}∣ links are annihilated, in such a way that only the two merging nodes change their number of links. This is done by reconnecting donor nodes of incoming links to acceptor nodes of outgoing links (Figure 9(b)).
 One new node of random sign is created, with one link connected to a randomly chosen node.

This is a minimalistic version of the more complicated 2D model presented by Hughes et al. (2003). Nevertheless, it gives the same qualitative results. The degree distribution is P(q) ∝ q^{−2} for both donors and acceptors, and the size distribution of the solar flares is P(s) ∝ s^{−3}. With the degree of a node interpreted as the size of a magnetic flux line, the results agree with other modeling (Hughes and Paczuski, 2003; Lu and Russell, 1991). However, the size distribution of solar flares measured by Dennis (1985) and Lin et al. (1984) follows a power law P(s) ∝ s^{−1.7±0.3}, which disagrees with our result. According to Lu and Russell (1991), the explanation is that the annihilation of solar flares triggers an avalanche of events that broadens the size distribution.
The advantage of our model is that it can be treated and understood analytically. Its simplistic form also opens the way to a coarse understanding of the fundamental mechanisms of some dynamic systems, not necessarily limited to processes in the Sun's magnetic corona. The total dominance of preferential attachment in the network community has overshadowed, and possibly hindered, the detailed investigation of other dynamic models. I believe that merging is one of the underestimated mechanisms that must be investigated further in the search for a complete understanding of the connection between the evolution of networks and their structure and function.
Beyond simple models
As already mentioned, there are numerous other models that produce powerlaw degree distributions. However, instead of giving examples I would like to mention a model that does not yet exist.
The tolerance is the inherent robustness of a network (Albert et al., 2000). An error is defined as the successive removal of randomly chosen nodes, and an attack as the removal of a few nodes that play a vital role in maintaining the network's connectivity; vital nodes are those with the highest degree. Compared with random networks, the tolerance against random failures, that is errors, is high in scale-free networks. The reason is that a high percentage of the nodes in a scale-free network have only a few connections. However, the price for high error tolerance is vulnerability to attacks: when only a few of the dominating nodes are removed, the network falls apart.
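The asymmetry between errors and attacks can be made concrete with an extreme hub-dominated toy topology, a star graph (my own minimal illustration, not a proper scale-free network):

```python
import random
from collections import deque

def giant_component(adj, removed):
    """Size of the largest connected component after removing a node set."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        best = max(best, len(comp))
    return best

n = 200  # star graph: node 0 is the hub, all others are leaves
adj = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}
rng = random.Random(5)
errors = set(rng.sample(range(1, n), 20))  # random failures hit leaves
attack = {0}                               # attack removes the hub
# Errors: the 180 surviving nodes stay connected through the hub.
# Attack: removing a single node shatters the network into isolated leaves.
```

In a scale-free network the effect is less drastic but qualitatively the same: random removals mostly hit low-degree nodes and barely dent the giant component, while removing the highest-degree nodes fragments it quickly.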
The question is therefore: what does the topology of a scale-free network optimized against attacks look like (Boccaletti et al., 2006)? A better understanding is of crucial importance to ensure the safety, capacity, and performance of man-made structures like the Internet and the WWW, and to exploit all of their possibilities (Dorogovtsev and Mendes, 2002).
This is the end of this introduction. If you are interested in reading further about the communication aspects of complex networks, you can continue with my thesis.
^{8}Although this was the first real evidence that the world is small, the concept was not new. Frigyes Karinthy published Chains in 1929, where he argued that everyone on Earth is at most five acquaintances away from anyone else (Newman et al., 2006).
^{9}An autonomous system is a collection of Internet-protocol networks and routers with a common routing policy, typically under the control of an Internet service provider.
^{10}A simulation is available at http://cmol.nbi.dk/javaapp.php
^{11}A simulation is available at http://cmol.nbi.dk/javaapp.php
Martin Rosvall Associate professor +46 70 239 1973 Department of Physics Umeå University SE901 87 Umeå Sweden