Complex networks

Physics is a powerful science. Nevertheless, deterministic Newtonian science lacks the words to describe complex behavior. Physics has been the tool that breaks things down into laws. This reductionist viewpoint has proved very successful, but it is now time to reassemble things into a whole again (Wilson 1998). There is much to be learned on this journey, and a new science has to be developed.

Here I take the approach that complex systems are best understood through simple models. Simple, because it should be possible to understand the model in itself. Moreover, a model that contains everything that is known about the system can, by some fiddling with the parameters, in principle predict anything. And a model that can predict anything predicts nothing (Popper 1963).


Networks are everywhere, have always been, and will always be—social networks, gene networks, cellular networks, ecological networks, computer networks, phone-call networks, airline networks, power-line networks—why this buzz now? The short answer is the Internet.

A longer answer is that mathematicians have had, since Leonhard Euler solved the so-called Königsberg bridge problem1 in 1736, the knowledge to abstract away details from a problem and reduce it to a graph2: a set of nodes and links. The strength of this mathematical object is of course that the nodes are not limited to representing land masses, nor the links to representing bridges. The list of networks above could have been made much longer, since so many systems can be reduced to a network while their complexity is maintained.

It matters to abstract a system into its underlying network when the interacting units are unique, like humans, proteins, and web pages, but not when they are interchangeable, like atoms, electrons, and spins3. It is therefore not surprising that sociologists, and not physicists, were the first to apply the abstract mathematical objects of graphs to concrete problems. This happened around the 1950s, and one challenging question was “How small is the world?” (Kochen 1989; Milgram 1967; de Sola Pool and Kochen 1978). At about the same time the two Hungarian mathematicians Paul Erdös and Alfréd Rényi started to develop a statistical theory of dynamic networks (Erdös and Rényi 1959), originally proposed by Solomonoff and Rapoport (1951).

Why did the field of network research wait another 50 years to take off? The main problem the sociologists faced was the difficulty of creating large, reliable networks. De Sola Pool and Kochen (1978) in particular asked many fundamental questions, still important today, but they could not answer them adequately because of the poor experimental data. For example, to estimate the length of the shortest chain of friends between two people chosen at random, they first had to estimate how many friends an individual has. Two problems they faced were: what is a friend, and why are people so poor at estimating their number of friends? Try to estimate your number of friends and you will realize that it is not straightforward.

Kochen and de Sola Pool instead turned to the mathematical models by Solomonoff and Rapoport (1951) and Erdös and Rényi (1959), and estimated the average separation at three. Milgram (1967), by tracing letters sent through chains of acquaintances, estimated the average distance at six. Obviously there was a problem here. The experiments gave an average separation twice as large as the models predicted. Were the models wrong, or the experiments unsound?

One exception to the small and unreliable experimental data was the pioneering work by Price (1965) on the network of citations between scientific articles. He found that the distribution of citations, corresponding to links in network language, was not peaked at an average value as in the mathematical models of random networks (Erdös and Rényi 1959). Instead most articles had only a few citations and a few articles had many: the distribution of citations per article followed a power law. A decade later he presented a model called cumulative advantage, based on an idea by Simon (1955), that could explain this observation (Price 1976).4 However, the work by Price considered one particular network, and the concepts did not cross over to other sciences; it remained isolated.

In the late 1990s, at the same time as the Internet started to become mature and usable5, all the necessary ingredients were suddenly at hand (Barabási and Albert 2002): the emergence of large databases on the topology of various real networks, increased computing power, the noticeable breakdown of boundaries between disciplines, and the new holistic viewpoint of trying to understand the system as a whole. Again it became apparent that it is impossible to overestimate the importance of cross-fertilization between theory and experiment in science.

The breakthrough originated in the physics community because the tools of statistical mechanics are particularly well suited to analyzing the patterns of interactions in networks. In statistical mechanics, physicists have long investigated how the macroscopic behavior of a system emerges from the properties of its microscopic constituents, the agents. The new idea was that behind every complex system there is an underlying network with a nonrandom topology (Barabási 2001). It became clear that this pattern of interactions, which forms the network, plays a fundamental role in understanding the system (Newman 2002b).

Real-world networks

The first network that was large enough for reliable statistical measurements, and at the same time accessible, turned out to be the World Wide Web (WWW).6 To test a real-world network against the random network model (Erdös and Rényi 1959), Barabási et al. (2000) analyzed a subdomain of the WWW consisting of more than 300,000 web pages. As in the study by Price (1965), the distribution of the number of links per node did not follow the prediction of the random network model. Again it was power-law distributed. This time, however, the study did not remain isolated, because a number of experiments followed shortly. Broder et al. (2000) repeated the experiment on a much larger section of the WWW and confirmed the results. Faloutsos et al. (1999) and Vázquez et al. (2002) showed that the distribution of physical connections in the Internet followed roughly the same distribution.

Were the discovered power-law tails of the link distributions “much ado about nothing”? Yes and no. Yes, because the discovery has led to an unreasonable hunt for power laws in empirical studies and to the construction of ad hoc models that produce power laws. No, because the few nodes with a very high number of links, the hubs of the network, have a dramatic effect on the properties of the network.7 Moreover, the broad distribution was discovered in networks other than technological ones as well. Watts and Strogatz (1998) studied movie actor collaborations, Fell and Wagner (2000) the metabolic network of the bacterium E. coli, Jeong et al. (2000) the metabolic networks of more than 40 organisms, Amaral et al. (2000) airline networks, Liljeros et al. (2001) sexual contacts, and we have for example studied city street networks (Rosvall et al. 2005). The list can be made much longer, but the general result was that the networks neither looked like the random network model predicted nor were regular. What they really look like is what the remainder of this text is about.

Network topology

I will begin this section with two examples of real-world networks. These will not only visualize the pattern of links connecting nodes, the network topology or network structure, in two different systems. They will also illustrate the difference between a cumbersome but detailed, and an efficient but inexact, way of extracting the underlying network of a system.

Two network examples

The first network, in Figure 1, is by Krebs (2002) and describes the September 11 terrorist network. With the aim to “uncover network patterns that would reveal Al Qaeda’s preferred methods of stealth organization,” Krebs gathered information from major newspapers such as the New York Times and the Wall Street Journal. Obviously this method is time-consuming and inevitably incomplete.

The second network, in Figure 2, consists of a subset of the 100 most influential people in 2006 selected by Time Magazine. In this case I have let Google create the network by asking how many hits two names get together compared with what one of the names gets alone. If the ratio exceeds a threshold, 5% in this case, I create a directed link between the two names, pointing to the name I compare with. This method is very fast, but of course gives only a proxy of the real network.
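A sketch of this procedure in Python (my reconstruction, not the original script; the names and hit counts are hypothetical stand-ins for what a search engine would return):

```python
# Hypothetical hit counts standing in for search-engine results.
# Keys are sorted name tuples; values are numbers of pages.
hits = {
    ("Oprah Winfrey",): 50_000_000,
    ("Condoleezza Rice",): 12_000_000,
    ("Condoleezza Rice", "Oprah Winfrey"): 900_000,
}

def hit_count(*names):
    """Number of (mock) pages mentioning all the given names."""
    return hits.get(tuple(sorted(names)), 0)

def co_occurrence_links(names, threshold=0.05):
    """Directed link a -> b whenever hits(a AND b) / hits(b) exceeds the
    threshold, i.e. the link points to the name compared with."""
    return [(a, b)
            for a in names for b in names
            if a != b and hit_count(b) > 0
            and hit_count(a, b) / hit_count(b) > threshold]

names = ["Oprah Winfrey", "Condoleezza Rice"]
print(co_occurrence_links(names))
# [('Oprah Winfrey', 'Condoleezza Rice')]
```

Note how the link is asymmetric: the joint count is a larger fraction of the rarer name's hits, so the link points toward that name.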


Figure 1: Terrorist network of the September 11 hijackers and their associates (Krebs 2002). In total 62 terrorists, with the terrorists on each plane identified by separate colors.


Figure 2: A subset (54) of the 100 most influential people in the world selected by Time magazine, and connected by Google in June 2006. A strong connection is represented by a dark link.

I have already stated that real-world networks differ from the random network model by Erdös and Rényi (1959). This difference refers to the way the links are organized between the nodes: the network topology. It is, for example, easy to see that the links in the two networks in Figure 1 and Figure 2 are not randomly distributed, but how much, and in what way, are they not? Before answering these questions, we need a benchmark that enables comparison between networks through proper null models. However, I have already used too many concepts that deserve to be defined rigorously before I use them, so it is time to take a step back.

Nodes and links

Technically speaking, a mathematician uses the term graph for a structure with a set of objects called vertices connected by links called edges. If the edges have directions (object A links to B, but B does not automatically link to A), the graph is directed. If the edges are also associated with a weight, which for example could represent the distance between two objects, the mathematician uses the term network. However, the fact that network, and not graph, will be used from now on does not imply that this text is about directed weighted graphs. Instead, undirected and unweighted graphs are almost exclusively the focus of this text. This contradiction originates from the loose language outside mathematics and the tradition of using the term network for something that represents a real-world evolving system upon which distributed dynamic processes might act (Newman et al. 2006). I will also use the terms nodes and links instead of vertices and edges.

The simplest concepts beyond the nodes and links, still belonging to the local properties of a network, are the degree and the clustering. They will be explained in the following sections together with the shortest path. Degree, clustering, and shortest path were three major concepts in focus in the early studies of complex networks, not least because of their relevance to the fundamental questions raised by de Sola Pool and Kochen (1978).

Degree distribution

The degree k_v is the number of links incident to node v and is consequently equal to the number of nearest neighbors of v. In the physics literature the degree of a node is also known as the node’s connectivity. The spread in node degree is described by the degree distribution P(k), which gives the probability that a randomly chosen node in the network has exactly k links, or equivalently k nearest neighbors. The degree distribution plays a central role in what follows. Power-law degree distributions have already been mentioned, and refer to distributions proportional to P(k) ∝ k^(−γ). A real network does not of course exactly follow a given parametrization of a distribution, but if it roughly follows P(k) ∝ k^(−γ) it will be called a broad degree distribution in this text. The important feature of a network with a broad degree distribution is that most nodes have very few links, but a few nodes have very many links. The nodes with very many links are called hubs.
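As a concrete illustration, the empirical P(k) can be read off an undirected edge list (the small star graph here is my own example; nodes of degree zero do not appear in an edge list and are ignored):

```python
from collections import Counter

def degree_distribution(edges):
    """Empirical degree distribution P(k) from an undirected edge list.

    Returns a dict mapping degree k to the fraction of nodes with that degree.
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    n = len(degree)
    counts = Counter(degree.values())
    return {k: c / n for k, c in counts.items()}

# A small star: one hub of degree 4 and four leaves of degree 1.
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(degree_distribution(star))  # {4: 0.2, 1: 0.8}
```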

Degree correlations


Figure 3: The elementary step in the random rewiring introduced by Maslov and Sneppen (2002). The links e_uv and e_xy are chosen at random and rewired to become e_uy and e_vx, provided that none of these links already existed.

Maslov and Sneppen (2002) developed a method to test for correlations in the degrees of the nodes. The probability P(k_0, k_1) that two nodes of degree k_0 and k_1 are connected by a link is calculated and compared with the probability P_r(k_0, k_1) in a randomized version of the same network with preserved degree sequence. The idea is a reinvention of the conditional uniform test by Katz and Powell (1957), presented in a network context by Snijders (1991).

In the randomized counterpart of the original network, the null network, all nodes have the same degree as in the original network, but the connections are randomized. The random counterpart is created by first randomly selecting a pair of links, like e_uv and e_xy in Figure 3. The two links are then rewired in such a way that u becomes connected to y and v to x, or vice versa, provided that none of these links already existed. This step is iterated until all information about the original network is lost.
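This elementary step translates into a short routine. The sketch below is my own minimal Python version, assuming an undirected network given as an edge list; the budget of 10 swap attempts per link and the fixed seed are arbitrary choices, and only one of the two rewiring variants is attempted per step:

```python
import random

def ms_randomize(edges, swaps_per_link=10, seed=1):
    """Degree-preserving randomization by repeated double-edge swaps,
    a sketch of the Maslov-Sneppen null model (undirected network,
    no self-loops or duplicate links allowed)."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(swaps_per_link * len(edges)):
        i, j = rng.sample(range(len(edges)), 2)
        u, v = edges[i]
        x, y = edges[j]
        # Propose rewiring (u,v), (x,y) -> (u,y), (v,x).
        if len({u, v, x, y}) < 4:
            continue  # would create a self-loop
        if frozenset((u, y)) in edge_set or frozenset((v, x)) in edge_set:
            continue  # would duplicate an existing link
        edge_set -= {frozenset((u, v)), frozenset((x, y))}
        edge_set |= {frozenset((u, y)), frozenset((v, x))}
        edges[i], edges[j] = (u, y), (v, x)
    return edges

ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
randomized = ms_randomize(ring)  # same degree sequence, shuffled connections
```

Every accepted swap leaves all four node degrees unchanged, which is exactly why the degree sequence of the null network matches the original.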

If the above algorithm is repeated, a set of randomized counterparts makes it possible to measure average expectation values and standard deviations for any property of the network. The randomized networks will hereafter be called MS-randomized networks (from Maslov and Sneppen) to avoid confusion with the random networks already mentioned. The ratio of a property between the original and MS-randomized versions uncovers deviations from random properties. The null-network approach is extremely powerful for separating nonrandom features from features that are only associated with a particular degree distribution, and it will be used over and over here.


Figure 4: A network as a degree landscape with mountains and valleys, with the altitude of a node proportional to its degree.

In Axelsen et al. (2006) we generalized the degree-organizational view (Pastor-Satorras et al. 2001) of real-world networks with broad degree distributions (Barabási and Albert 1999) in a landscape analogy with mountains (high-degree nodes) and valleys (low-degree nodes) (see Figure 4). For example, with the presented viewpoint, correlated degrees between adjacent nodes correspond to smooth landscapes (social networks) (Newman 2002a), hierarchical networks to one-mountain landscapes (the Internet) (Trusina et al. 2004), and networks with anti-correlated degrees without hierarchical features to rough landscapes with several mountains (Newman 2003) (see Figure 5).


Figure 5: The terrorist network in Figure 1, here as degree landscapes. The left landscape corresponds to the original network. In the middle it is randomized with fixed degree distribution, and to the right it is completely reshuffled without any constraints except for the number of nodes and links. The important difference between the left and the middle network is that the latter essentially contains only one mountain. All high-degree nodes have disappeared in the completely reshuffled network to the right. This is clear in the topographic maps (bottom), where the mountain is washed out for the right network.

To quantify the topology, we measured the widths of the mountains and the separation between different mountains. We also generated ridged landscapes to model networks organized under constraints imposed by the space the networks are embedded in, associated with spatial localization or, in molecular networks, functional localization.

Shortest path

The length of all links is here defined to be 1. The distance l_uv = l_vu between two nodes u and v in the network is consequently the number of links traced on the shortest path between the nodes. A connected component is a set of nodes that can all be reached from each other, disconnected from other components of the network. If the network contains only one connected component, the network is connected. When this is true, it is straightforward to define the average path length l as the average over all shortest paths

    l = 2/(N(N−1)) ∑_{u<v} l_uv.

The node that on average has the shortest path length to all other nodes in the network can naturally be defined as the center node.

If the network is disconnected, the average reciprocal distance l⁻¹, also called the efficiency, can be used instead of the average path length (Latora and Marchiori 2001)

    l⁻¹ = 2/(N(N−1)) ∑_{u<v} 1/l_uv.
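Since every link has length 1, both quantities follow from plain breadth-first search. A minimal sketch (the path-graph example and adjacency-list format are my own illustration):

```python
from collections import deque

def distances_from(adj, s):
    """Shortest-path lengths (in links) from s, with every link of length 1."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_path_length(adj):
    """Average of l_uv over all pairs; assumes a connected network."""
    n = len(adj)
    total = sum(d for s in adj
                for t, d in distances_from(adj, s).items() if t != s)
    return total / (n * (n - 1))  # ordered pairs: both sums count each pair twice

def efficiency(adj):
    """Average reciprocal distance; unreachable pairs contribute zero."""
    n = len(adj)
    total = sum(1 / d for s in adj
                for t, d in distances_from(adj, s).items() if t != s)
    return total / (n * (n - 1))

# Path graph 0-1-2-3: average path length 5/3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For a disconnected network, `distances_from` simply never reaches the other components, so those pairs drop out of the efficiency sum, exactly as the definition requires.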

A measure related to the shortest path is the betweenness. The betweenness of a node is the number of times the node lies on the shortest paths between all other pairs of nodes. The quantity was introduced to investigate the participation rates of nodes in networks, with the shortest paths as proxies for communication channels (Freeman 1977). An alternative center node in this context is the one with the highest betweenness.


Clustering

The clustering coefficient C is the probability that two nearest neighbors of a node are also nearest neighbors of one another. In the network, such a relation forms a triangle of links. Accordingly, a high clustering coefficient reflects an increased number of triangles in the network and hints at a dense topology on the local network level. A common definition of the clustering coefficient is given by Barrat and Weigt (2000):

    C = 3N_△/N_∧,                              (3)

where N_△ is the number of triangles in the network and N_∧ is the number of connected triples of nodes in the network. Triangles are trios of nodes with every node connected to both of the others. Connected triples are trios of nodes with at least one node connected to both of the others. The factor 3 compensates for the fact that each triangle corresponds to three connected triples, and it constrains C to lie in the range between 0 and 1. In a social network, where nodes are persons and links symbolize friendship, C corresponds to the probability that two of one’s friends also are friends with one another. Whereas the shortest path is a global property of the network, clustering is a local property.
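Counting triangles and connected triples node by node gives C directly; the sketch and the triangle-plus-pendant example below are my own:

```python
from itertools import combinations

def clustering_coefficient(adj):
    """C = 3 * (number of triangles) / (number of connected triples),
    for an undirected network given as an adjacency list of sets."""
    corner_triangles = 0  # each triangle counted at all 3 corners, i.e. 3 * N_tri
    triples = 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        triples += k * (k - 1) // 2  # connected triples centered on v
        for a, b in combinations(sorted(nbrs), 2):
            if b in adj[a]:
                corner_triangles += 1
    return corner_triangles / triples if triples else 0.0

# A triangle with one pendant node: one triangle, five connected triples.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering_coefficient(adj))  # 0.6
```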

The drawback of the definition of clustering above is that it depends on the global properties of the network. To overcome this problem we use the null-network approach. We think that the best way to calculate the clustering in a network is to count the number of triangles in the network and compare with the number of triangles in its random counterparts. This avoids the direct effect of the degree distribution on the clustering, and we use this method in all presented papers.

Beyond simple measures

There are plenty of other measures invented to reveal the structure of real-world networks, too many to mention them all here. However, just by looking at the networks in Figure 1 and Figure 2 it is clear that they both have a higher-order structure beyond the broad distributions of links and the many triangles. It looks like the terrorist network could easily be divided into three groups, and the network of influential people into two groups. A closer look at the names makes it clear that political leaders form one group and people from showbiz form the other, and that Oprah Winfrey falls somewhere in between.

Community structure

No one has contributed more than Newman (2004a) to the detection of groups in networks, or community structures as they are often called. For example, Girvan and Newman (2002) presented a method based on the betweenness. They showed that it is possible to partition the network by successively removing the link with the highest betweenness until the network falls apart into two disconnected components. This brute-force method can then be repeated on the components formed.

Later, Newman and Girvan (2004) introduced a measure of community structure under the name modularity. It resembles the process we go through when we look at the networks in Figure 1 and Figure 2 to identify the groups. A meaningful division of a network is not only one in which there are few links between the groups; it is one in which there are fewer than expected. They formulated this mathematically and showed that it gives rise to meaningful partitions of networks (Newman 2004b). Newman (2006) presented a very efficient matrix-based method for maximizing the modularity over possible divisions of a network.
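For a given division, the measure can be sketched as follows (my own minimal Python version of the Newman-Girvan formula Q = ∑_i (e_ii − a_i²); the two-triangle network and its split are illustrative):

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q = sum_i (e_ii - a_i^2) for an undirected
    network: e_ii is the fraction of links inside community i, and a_i is
    the fraction of link ends attached to community i."""
    m = len(edges)
    e = {}  # fraction of links inside each community
    a = {}  # fraction of link ends attached to each community
    for u, v in edges:
        cu, cv = community[u], community[v]
        a[cu] = a.get(cu, 0) + 1 / (2 * m)
        a[cv] = a.get(cv, 0) + 1 / (2 * m)
        if cu == cv:
            e[cu] = e.get(cu, 0) + 1 / m
    return sum(e.get(c, 0) - a[c] ** 2 for c in a)

# Two triangles joined by one bridge link: the natural two-group division.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, community), 3))  # 0.357
```

Q is positive here because only one of seven links crosses the boundary, fewer than expected for randomly placed links; lumping all nodes into one community gives Q = 0 by construction.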

The other way to look at the problem is to study the formation process and ask what dynamic features the community structure brings. This is indirectly the topic of the two final chapters of my thesis. The formation process has also been studied by, for example, Grönlund and Holme (2004), and the effect of community structure on the spread of fads by Grönlund and Holme (2005). Boguñá and Pastor-Satorras (2002), for example, showed that the epidemic threshold can be nonzero even in networks with a power-law degree distribution if the network is modular or has high clustering.

Before going into more detail here, and to be able to connect the network topology with its intrinsic function and the underlying evolutionary process, the next chapter will be about network models.

1Seven bridges connected the four different land masses in Königsberg, and the question was, “Does there exist any single path that crosses all seven bridges exactly once each?”

2The difference between the mathematician’s graph and the networks discussed here is subtle: where a graph is mainly a mathematical object, a network often refers to a decentralized, naturally evolving system (Newman et al., 2006).

3Spin is the intrinsic angular momentum in elementary particles.

4This model is very similar to the preferential attachment model discussed in the next chapter.

5Notably, Ines Uusmann, the former Swedish minister of transport and communications, said in 1996 that “the Internet is a fad that may just pass by.”

6The network of the WWW consists of web pages linked by the hyperlinks we click on to go from page to page.

7Pastor-Satorras and Vespignani (2001) showed that networks with a power-law distribution of links per node may lack an epidemic threshold, in which case a disease will always spread.

Martin Rosvall
Department of Physics, Umeå University