A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes computers very flexible and useful. Computers are used as control systems for a very wide variety of industrial and consumer devices: simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design systems, and general-purpose devices like personal computers and mobile devices such as smartphones. The Internet runs on computers and connects millions of other computers.

Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.


According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: “I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number.” This usage of the term referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.[1]

The Online Etymology Dictionary gives the first attested use of “computer” in the 1640s, meaning “one who calculates”; this is an “agent noun from compute (v.)”. The same source states that the use of the term to mean “calculating machine” (of any type) is from 1897, and that the “modern use” of the term, meaning “programmable digital electronic computer”, dates from 1945 “under this name; [in a] theoretical [sense] from 1937, as Turing machine”.


Pre-20th century

-The earliest counting device was probably a form of tally stick.

-Later record-keeping aids included calculi (clay spheres, cones, etc.), which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[3][4] The use of counting rods is one example.

-The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC.

-In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

-The Antikythera mechanism is believed to be the earliest mechanical analog “computer”, according to Derek J. de Solla Price.[5] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.

-The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.

-The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus.

-A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy.

-An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.

-Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[10] an early fixed-wired knowledge processing machine[11] with a gear train and gear-wheels,[12] circa 1000 AD.

-The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.

-The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular example is the E6B.

-In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically “programmed” to read instructions. Along with two other complex machines, the doll is at the Musée d’Art et d’Histoire of Neuchâtel, Switzerland, and still operates.

-The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

-The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[14] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
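The slide rule described above exploits a single identity, log(a·b) = log(a) + log(b): multiplication becomes the physical addition of two logarithmic lengths, and division their subtraction. A minimal sketch of that principle (the function names are invented for illustration):

```python
import math

def slide_rule_multiply(a, b):
    """Multiply as a slide rule does: add logarithms, then take the antilog."""
    return 10 ** (math.log10(a) + math.log10(b))

def slide_rule_divide(a, b):
    """Divide as a slide rule does: subtract logarithms."""
    return 10 ** (math.log10(a) - math.log10(b))

print(slide_rule_multiply(2, 8))   # ~16.0
print(slide_rule_divide(21, 3))    # ~7.0
```

The small floating-point error in the result mirrors the physical device: a slide rule's answers are also only as precise as its scales can be read.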

First computing device

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the “father of the computer”,[15] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

-The machine was about a century ahead of its time. All the parts for his machine had to be made by hand — this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Analog computers

-During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[18] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. 

-The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.

-The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. 

-This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).

Digital computers


-By 1938 the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

-Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.

Vacuum tubes and digital electronic circuits

-The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.

-In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[26] the first “automatic electronic digital computer”.[27] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.

-During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[28] He spent eleven months from early February 1943 designing and building the first Colossus.[29] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[30] and attacked its first message on 5 February.

-Colossus was the world’s first electronic digital programmable computer.[18] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.

-The U.S.-built ENIAC[33] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus, it was much faster and more flexible. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.

Modern computers

Concept of modern computer

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[35] On Computable Numbers. Turing proposed a simple device that he called the “Universal Computing machine” and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing’s design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[36] Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
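The mechanics of a Turing machine are simple enough to sketch in a few lines: a tape of symbols, a read/write head, and a table of rules. The toy below illustrates the idea only and is not Turing's original construction; all names and the sample "program" are invented for the example.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Execute a transition table of the form
    (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). Halts when no rule applies."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A tiny program: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100
```

The key point is that `run_turing_machine` is one fixed mechanism: changing the `rules` table changes what is computed, which is exactly the sense in which the machine is programmable.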

Stored programs

Early computing machines had fixed programs. Changing the function of such a machine required re-wiring and re-structuring it.[28] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report “Proposed Electronic Calculator” was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
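The stored-program idea, instructions and data sharing one memory, walked by a fetch-decode-execute loop, can be sketched directly. The three-instruction machine below is invented for illustration, not a model of any real design:

```python
# A toy stored-program machine: the list `memory` holds both the program
# (cells 0..6) and the data it operates on (cells 8..10).
def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch
        pc += 2
        if op == "LOAD":      # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

memory = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 2, 3, 0]
print(run(memory)[10])  # -> 5
```

Because the program is just data in `memory`, replacing those cells reprograms the machine with no "re-wiring", which is the whole point of the stored-program design.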


Transistors

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by Geoffrey W.A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[46]

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[48] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated”.[49][50] Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[51] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]

Mobile computers become dominant

With the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of operating systems and have become the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in the second quarter of 2013.


Kernel -Part 3

What is a kernel?  If you spend any time reading Android forums, blogs, how-to posts or online discussions you’ll soon hear people talking about the kernel.  A kernel isn’t something unique to Android – iOS and MacOS have one, Windows has one, BlackBerry’s QNX has one; in fact, all high-level operating systems have one.  The one we’re interested in is Linux, as it’s the one Android uses. Let’s try to break down what it is and what it does.

Android devices use the Linux kernel, but every phone runs its own version of it. Linux kernel maintainers keep everything tidy and available, contributors (like Google) add or alter things to better meet their needs, and the people making the hardware contribute as well, because they need to develop hardware drivers for the parts they’re using, against the kernel version they’re using.  This is why it takes a while for independent Android developers and hackers to port new versions to older devices and get everything working.  Drivers written to work with one version of the kernel for a phone might not work with a different version of software on the same phone. And that’s important, because one of the kernel’s main functions is to control the hardware.  It’s a whole lot of source code, with more options while building it than you can imagine, but in the end it’s just the intermediary between the hardware and the software.

When software needs the hardware to do anything, it sends a request to the kernel.  And when we say anything, we mean anything.  From the brightness of the screen, to the volume level, to initiating a call through the radio, even what’s drawn on the display is ultimately controlled by the kernel.  For example – when you tap the search button on your phone, you tell the software to open the search application.  What happens is that you touched a certain point on the digitizer, which tells the software that you’ve touched the screen at those coordinates.  The software knows that when that particular spot is touched, the search dialog is supposed to open.  The kernel is what tells the digitizer to look (or listen, events are “listened” for) for touches, helps figure out where you touched, and tells the system you touched it.  In turn, when the system receives a touch event at a specific point from the kernel (through the driver) it knows what to draw on your screen.  Both the hardware and the software communicate both ways with the kernel, and that’s how your phone knows when to do something.  Input from one side is sent as output to the other, whether it’s you playing Angry Birds, or connecting to your car’s Bluetooth.  

It sounds complicated, and it is.  But it’s also pretty standard computer logic — there’s an action of some sort generated for every event, and depending on that action things happen to the running software. Without the kernel to accept and send information, developers would have to write code for every single event for every single piece of hardware in your device. With the kernel, all they have to do is communicate with it through the Android system API’s, and hardware developers only have to make the device hardware communicate with the kernel. The good thing is that you don’t need to know exactly how or why the kernel does what it does, just understanding that it’s the go-between from software to hardware gives you a pretty good grasp of what’s happening under the glass.  
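The flow described above, hardware reports a raw event, the kernel translates it, and listening software reacts, can be sketched as a toy dispatcher. Everything here is illustrative; these are not real Android or Linux APIs:

```python
# Toy sketch of kernel-mediated event flow: the "kernel" sits between a
# hardware interrupt and the software listeners registered with it.
class ToyKernel:
    def __init__(self):
        self.listeners = []

    def register(self, listener):
        """Software asks to be notified of events (it 'listens')."""
        self.listeners.append(listener)

    def hardware_interrupt(self, x, y):
        """The digitizer reports raw coordinates; the kernel turns them
        into an event and passes it up to the software."""
        event = {"type": "touch", "x": x, "y": y}
        for listener in self.listeners:
            listener(event)

opened = []
def search_button(event):
    # The software knows this particular spot means "open search".
    if event["x"] == 40 and event["y"] == 700:
        opened.append("search")

kernel = ToyKernel()
kernel.register(search_button)
kernel.hardware_interrupt(40, 700)
print(opened)  # -> ['search']
```

Note that `search_button` never talks to the digitizer, and the digitizer knows nothing about search: each side only communicates with the go-between, which is the arrangement the paragraphs above describe.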


Kernel -Part 3B

This is a term for the computing elite, so proceed at your own risk. To understand what a kernel is, you first need to know that today’s operating systems are built in “layers.” Each layer has different functions such as serial port access, disk access, memory management, and the user interface itself. The base layer, or the foundation of the operating system, is called the kernel. The kernel provides the most basic “low-level” services, such as the hardware-software interaction and memory management. The more efficient the kernel is, the more efficiently the operating system will run.

When referring to an operating system, the kernel is the first section of the operating system to load into memory. As the center of the operating system, the kernel needs to be small and efficient and to be loaded into a protected area of memory, so that it is not overwritten. It can be responsible for such things as disk drive management, interrupt handling, file management, memory management, process management, etc.

Kernel -Part 2

The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system.

A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.

The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains there for the entire duration of the computer session because its services are required continuously. Thus it is important for it to be as small as possible while still providing all the essential services needed by the other parts of the operating system and by the various application programs.

When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire computer could stall or shut down.

The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
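Python's os module makes the system-call interface just described easy to see: functions such as os.open, os.write, os.read and os.close are thin wrappers over the kernel's file-I/O services (open(2), write(2), read(2) and close(2) on Unix-like systems):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Ask the kernel to create and open a file for writing.
fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # wraps open(2)
os.write(fd, b"hello kernel")                  # wraps write(2)
os.close(fd)                                   # wraps close(2)

# Ask the kernel to read the data back.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                        # wraps read(2)
os.close(fd)
print(data)  # -> b'hello kernel'
```

The integer `fd` (file descriptor) is the kernel's handle for the open file; the program never touches the disk hardware itself, it only makes requests through this interface.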

Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process obtains its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.

The contents of a kernel vary considerably according to the operating system, but they typically include (1) a scheduler, which determines how the various processes share the kernel’s processing time (including in what order), (2) a supervisor, which grants use of the computer to each process when it is scheduled, (3) an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel’s services and (4) a memory manager, which allocates the system’s address spaces (i.e., locations in memory) among all users of the kernel’s services.
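Component (1), the scheduler, can be illustrated with the classic round-robin policy: each process runs for a fixed time slice (a quantum) and, if unfinished, goes to the back of the queue. This is only a sketch of one simple policy, not how any particular kernel schedules:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Toy round-robin scheduler. Each task is (name, remaining_time);
    returns the order in which tasks get the processor."""
    queue = deque(tasks)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)            # the task runs for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the queue
    return order

print(round_robin([("A", 3), ("B", 1), ("C", 2)], quantum=2))
# -> ['A', 'B', 'C', 'A']
```

Task A needs 3 units but only gets 2 per turn, so it must wait for B and C before finishing, which is exactly the "each process obtains its turn" behavior described above.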

The kernel should not be confused with the BIOS (Basic Input/Output System). The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel.

Most kernels have been developed for a specific operating system, and there is usually only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the only kernel for Microsoft Windows 2000 and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.

A few kernels have been designed with the goal of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie Mellon University and is used in the Mac OS X operating system.

The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as the Microsoft Windows systems. The reasons are that the kernel is highly configurable in the case of Linux and users are encouraged to learn about and modify it and to download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.

Categories of Kernels

Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advocates and detractors.

Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel’s capabilities as required, while helping to minimize the amount of code running in kernel space.
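Loading modules at runtime means extending a running kernel without rebooting it. As a loose user-space analogy (this is ordinary Python, not kernel code), a running program can pull in code it has never seen before and start using it; the "driver" below is purely illustrative:

```python
import importlib.util
import os
import tempfile

# Write a new "module" to disk while the program is already running.
module_source = "def capability():\n    return 'driver loaded'\n"
path = os.path.join(tempfile.mkdtemp(), "toy_driver.py")
with open(path, "w") as f:
    f.write(module_source)

# Load it dynamically and call into it -- no restart required.
spec = importlib.util.spec_from_file_location("toy_driver", path)
toy_driver = importlib.util.module_from_spec(spec)
spec.loader.exec_module(toy_driver)

print(toy_driver.capability())  # -> driver loaded
```

A Linux kernel module is loaded into kernel space (with tools like insmod or modprobe) rather than into an interpreter, but the benefit is the same: new capabilities appear without rebuilding or restarting the whole system.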

A microkernel usually provides only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.

Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would were it in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux).

Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.

Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to simultaneously run both Linux and Windows applications.

The Monolithic Versus Micro Controversy

In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war (i.e., a war of words on the Internet) between Andrew Tanenbaum, the developer of the MINIX operating system, and Linus Torvalds, who originally developed Linux based largely on MINIX.

Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.

Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture (i.e., processor type) that the operating system is to be used on. However, in practice, this has not appeared to be a major disadvantage, and it has not prevented Linux from being ported to numerous processors.

Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. Source code is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human-readable alphanumeric characters) and before it is converted by a compiler into object code that a computer’s processor can directly read and execute.

For example, the source code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer science students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can be improved more quickly than can microkernel-based systems.

Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of Linux version 2.4 on a typical Red Hat Linux 9 desktop installation. Contributing to the small size of the compiled Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules.

The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems (i.e., computer circuitry built into other products).

Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.



A kernel is the core component of an operating system. Using interprocess communication and system calls, it acts as a bridge between applications and the data processing performed at the hardware level.

When an operating system is loaded into memory, the kernel loads first and remains in memory until the operating system is shut down again. The kernel is responsible for low-level tasks such as disk management, task management and memory management.

A computer kernel interfaces between the three major computer hardware components, providing services between the application/user interface and the CPU, memory and other hardware I/O devices.

The kernel provides and manages computer resources, allowing other programs to run and use these resources. The kernel also sets up memory address space for applications, loads files with application code into memory, sets up the execution stack for programs and branches out to particular locations inside programs for execution.

The kernel is responsible for:

  • Process management for application execution
  • Memory management, allocation and I/O
  • Device management through the use of device drivers
  • System call control, which is essential for the execution of kernel services

There are five types of kernels:

  1. Monolithic Kernels: All operating system services run along with the main kernel thread in a monolithic kernel, which also resides in the same memory area, thereby providing powerful and rich hardware access.
  2. Microkernels: Define a simple abstraction over the hardware, using primitives or system calls to implement minimal OS services such as multitasking, memory management and interprocess communication.
  3. Hybrid Kernels: Run a few services in the kernel space to reduce the performance overhead of traditional microkernels, while still running some kernel code as servers in the user space.
  4. Nano Kernels: Simplify the memory requirement by delegating services, including basic ones like interrupt controllers or timers, to device drivers.
  5. Exo Kernels: Allocate physical hardware resources, such as processor time and disk blocks, to other programs, which can link to library operating systems that use the kernel to simulate operating system abstractions.


OS vs Kernel - Part 1

The Operating System is a generic name given to all of the elements (user interface, libraries, resources) which make up the system as a whole.

The kernel is the “brain” of the operating system, which controls everything from access to the hard disk to memory management. Whenever you want to do anything, it goes through the kernel.

A kernel is part of the operating system. It is the first thing that the boot loader loads onto the CPU (for most operating systems), it is the part that interfaces with the hardware, and it manages what programs can do with the hardware. It is really the central part of the OS. It is made up of drivers; a driver is a program that interfaces with a particular piece of hardware. For example, if I made a digital camera for computers, I would need to make a driver for it. The drivers are the only programs that can control the input and output of the computer.

The Kernel is the core piece of the operating system. It is not necessarily an operating system in and of itself.

Everything else is built around it.

In computing, the ‘kernel’ is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel’s responsibilities include managing the system’s resources (the communication between hardware and software components). 


Distribution Group and Powershell

Manage Distribution Groups by using PowerShell | Office 365

Manage Distribution Group using PowerShell in Office 365 | Adding members to existing Distribution Group | Part 3#5

Question:How To Add Users To A Distribution Group From A .CSV File In Exchange Server 2010

Add Users to a Distribution Group from a CSV File (Exchange 2010)

Add multiple members to a distribution group in O365

Exchange 2010/2013 Bulk Add Members to Distribution Groups From CSV

How to add multiple users (email address) to an Exchange 2010 distribution group via Powershell V1.0


Powershell 2: Add multiple members to distribution group with one call?

Add bulk users to Distribution Group using Powershell Cmdlet

Add users to a distribution group from a .csv file in exchange 2007\2010 Powershell

Exchange Server 2010 Windows PowerShell: Working with Distribution Groups

Tech Tip | Creating Office 365 Distribution Groups by CSV and Adding Members by CSV

Bulk add users to Distribution Group

is ADD-distributiongroupmember a cmdlet in powershell to add members into a distribution list

How can I add members to a new distribution group from a CSV file?

Bulk add members to a distribution list

Add Mailbox Users to Distribution Lists with PowerShell

Add multiple users to multiple groups from one import csv (follow up query)

Import distribution groups to Office 365: Exchange Online with PowerShell

Two Ways to Add Multiple Users or Contacts to a Distribution Group
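Nearly all of the titles above describe the same core pattern: pipe Import-Csv into Add-DistributionGroupMember. A minimal sketch, assuming a CSV file with an EmailAddress column and a group named "Sales-DG" (both names are hypothetical), run from an Exchange Management Shell or a remote Exchange session:

```powershell
# Hypothetical group name and CSV path; the CSV is assumed to have a header
# row containing an EmailAddress column.
Import-Csv -Path .\members.csv | ForEach-Object {
    Add-DistributionGroupMember -Identity "Sales-DG" -Member $_.EmailAddress
}
```

The same loop works against Exchange 2010 on-premises and Exchange Online once a session is established.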

point mailbox to another AD account

How to attach existing user mailbox to a new user in different domain

Understanding the Mailbox Move Request in Exchange 2010

Moving Mailboxes the Exchange 2010 Way

How to move mailbox from one Exchange 2010 to another (different domain)?

Moving Mailboxes in Exchange 2010 (Part 1)

How to Move Mailboxes in Exchange Server 2010

Easiest method to move Exchange mailboxes to new server?

Migrating Exchange 2010 mailboxes between domains how to….

How to change owner of a mailbox

Re-assign an exchange 2010 mailbox to another user

assign existing mailbox to a new user account

exchange 2010 reasign mailbox to another user

Exchange 2010 – Link mailbox with another AD account

How can I change which AD account an Exchange mailbox is connected to without deleting the account?

how to I assign an existing mailbox to a new user in Exchange 2010

Moving a Exchange 2007/2010 mailbox to another user

Move Exchange Mailbox Data “User A” to “User B” – User “A” got fired

Create new AD user accounts for all users, keep old mailbox/email address

Changing account of linked mailbox

Re-assign existing mailbox to new user

Existing mailbox assign to another user

moving mailboxes from one AD account to another – GUI

link old mailbox to New AD user account

HOW TO: Reconnect a mailbox to another user

Understanding Move Requests
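The move-request approach referenced above can be sketched in a few commands (mailbox identity and database name are placeholders; run from the Exchange Management Shell):

```powershell
# Hypothetical mailbox and target database names.
New-MoveRequest -Identity "user@contoso.com" -TargetDatabase "DB02"

# Monitor progress, then clean up completed requests.
Get-MoveRequest | Get-MoveRequestStatistics
Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest
```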

Check if an email address is valid


Detailed Guide On How To Do Mailbox Pinging

Use Telnet to test SMTP communication on Exchange servers

How to check an SMTP connection with a manual telnet session

How to check if an email address exists without sending an email?

Communicating with an email server using TELNET

How to Test SMTP Server from the Command Line via Telnet and in Online Tools

How Do I Ping an Email Address?

Resolving mail server problems with Nslookup and Telnet


What are Traceroute, Ping, Telnet and Nslookup commands?

SMTP Telnet Test

Setting the Source Address for Telnet

How to Send an SMTP Email

How to Test an Email Server with the Telnet Client

Cannot email or telnet on port 25 to external mail server…

Using Telnet To Check SMTP Service

How to Test SMTP Settings Using Telnet

Sending An Email Using Telnet

Testing for network connectivity

Test Your SMTP Mail Server via Telnet

SMTP Test via Telnet

How to test network connectivity with telnet

can’t telnet!

Test SMTP connection

How to use Telnet to Send a Test Message and Verify SMTP Communication

How to check if an email address really exists

Troubleshooting with Telnet

System: Emulating an HTTP/1.1 Request using Telnet

How to Verify an Email Address?

How to Verify if an Email Address Is Real or Fake

Command Line


How to check whether an email address exists?

How to verify Email Addresses in Outlook before sending a message.

How to Verify If an Email Address Is Valid

How to Verify an Email Address
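The technique behind most of these titles is a manual SMTP conversation over telnet. A sketch of such a session (host names and addresses are placeholders; lines prefixed with > are typed, the numbered lines are typical server replies, not guaranteed ones):

```
> telnet mail.example.com 25
220 mail.example.com ESMTP
> HELO client.example.org
250 mail.example.com
> MAIL FROM:<probe@example.org>
250 2.1.0 Sender OK
> RCPT TO:<user@example.com>
250 2.1.5 Recipient OK     (a 550 reply here usually means the address does not exist)
> QUIT
221 2.0.0 Bye
```

Note that many servers are configured as catch-alls or deliberately accept every RCPT TO to frustrate address harvesting, so a 250 reply does not prove the mailbox exists.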

Connecting to Exchange Server

I want to remotely connect Sharepoint server through powershell whose database is on different machine

How to Manage PowerShell Remote Access with SharePoint Server 2013

How to connect to an Exchange server via PowerShell

Use PowerShell Remoting to Manage SQL Servers Efficiently

PowerShell Function to Connect to Exchange On-Premises

Run Powershell script remotely but not Exchange server

Connect to Exchange servers using remote PowerShell

Connecting to Office 365 using Powershell

How to start remote PowerShell session to Exchange or Office 365

How do I set up a remote PowerShell session with Exchange 2010+?

Connecting to Exchange 2010 with PowerShell

Connecting to Exchange Online with remote PowerShell from a Mac
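The common thread in the links above is the implicit-remoting pattern for on-premises Exchange. A sketch, with a placeholder server name (Exchange Online uses a different ConnectionUri and authentication method):

```powershell
# exch01.contoso.com is a placeholder on-premises Exchange server.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://exch01.contoso.com/PowerShell/ `
    -Authentication Kerberos
Import-PSSession $Session

# ...run Exchange cmdlets here...

Remove-PSSession $Session   # always clean up, or you may exhaust the session limit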

Removing Unwanted Trailing/Beginning/Ending Spaces or Characters In an Excel Cell/Column

How To Remove Leading or Trailing Space in Excel 2013

How to remove white space from a number

How to remove leading spaces from Excel cells

Cleaning Up Data in Excel

Remove Extra Spaces from Lookup Values with TRIM

How To Remove Spaces In Excel Using The Function TRIM (Step-By-Step)

Importing Data Into Excel 2007 and Using the TRIM, SUBSTITUTE and CLEAN Functions to Remove Non-Printable Characters

How to Remove Extra Spaces in Excel

How to Remove Leading and Trailing Spaces in Excel

Remove excess spaces from data in Microsoft Excel

A quick way to delete blank rows in Excel

Excel Tip: Remove Spaces and Line Breaks from Cells

How to Remove Leading Spaces in Excel 2013

Remove Spaces in Excel Leading, Trailing, and Double


How to Remove Stubborn Spaces and Characters in an Excel Cell

Clean Excel Data With TRIM and SUBSTITUTE

Space Character: Remove Last Letter in Excel If It Is Blank

Excel Remove Leading and Trailing Spaces

How to Delete Leading and Trailing CSV Spaces in a Flash

How to remove spaces and characters in a cell

Excel Tip: Removing all spaces from text

Excel: Remove leading and trailing spaces

Join cells in Excel and remove excess commas between elements

Concatenate & Removing Spaces

How to Remove Spaces in Cells in Excel

Remove Extra Spaces from Excel Data

Remove leading and trailing spaces from text

3 ways to remove spaces between words / numbers in Excel cells

Excel TRIM function – quick way to remove extra spaces

How To Remove Leading Spaces In Cells In Excel?

How to remove a single Leading space in the numeric column in Excel 2013

Remove white space at the end of text in Excel

How to Remove Leading and Trailing Spaces in an Entire Column in Excel & Google Docs

Remove Extra Spaces

How to Remove Leading Blank Spaces in Excel

Excel TRIM Function Removes Spaces From Text

How to Remove Spaces Between Characters and Numbers in Excel

Excel Trim: Master the Ability to Clean Data in Excel

How to Remove Trailing Spaces From Character Values

Excel’s CLEAN Function is More Powerful Than You Think

How To Remove Leading And Trailing Spaces In Excel?

Remove Spaces in Excel – Leading, Trailing, and Double

Remove Leading & Trailing Spaces in Microsoft Excel 2010

Excel Trailing Spaces

How to Remove Whitespace From a String in Excel

Excel TRIM function for removing spaces

Excel Text and String Functions: TRIM & CLEAN

How do I delete unwanted spaces in multiple cells?

TRIM Function In Excel

Cleaning Up Text Data in Excel with TRIM Function

The TRIM function in Excel

Remove leading or trailing spaces in an entire column of data

Excel formula to remove space between words in a cell

How to check for spaces before or after a string in excel

Removing Spaces in Excel [closed]

How to remove space or “nextline” in front of date in Excel?

Trim leading whitespace from Excel cells
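Almost all of the titles above reduce to two worksheet functions. A1 is a placeholder cell reference:

```
=TRIM(A1)                              removes leading/trailing spaces and collapses
                                       internal runs of spaces to a single space
=TRIM(SUBSTITUTE(A1, CHAR(160), " "))  also converts the non-breaking space
                                       (character 160) common in text pasted from
                                       web pages, which TRIM alone ignores
```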

Editing A Cells Shows Formula Instead of Result

Display the Actual Cell Values When Creating or Editing a Formula

Excel Tips: Display the Actual Cell Values When Creating or Editing a Formula

Excel cells- show displayed text instead of value in Data Validation list

Excel formula displays formula rather than result

Excel – display text different to the actual value?

Top 5 reasons why cell displays formula instead of results in Excel

Excel Tip: Change Formula to Actual Value

Replace a formula with its result

Excel displays formula rather than result

Excel formula showing up as text not value??

Converting a Formula to a Value

Excel – cell displays formula instead of results

Formulas Appearing in Cell Instead of Result – EXCEL for iMac

Display Formulas Instead of Results

Excel formulas showing as text instead of displaying result

Excel Formulas are not working?!? What to do when all you see is the formula, not result

Show Formulas in Excel Instead of the Values

Show Excel Formulas Instead of Results

6 Methods to Convert a Formula to Value in Excel

How to replace formulas with their values in Excel

Excel cell shows formula and not the result

Excel Worksheet Shows Formulas, Not Results

Excel formulas showing instead of values after QuickBooks export


How to convert formulas to values

How To Replace Formulas With Results Or Value In Excel?

How to Convert Formulas to Values in Excel

Change formulas to values

Convert cell formula to value – short cut key
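Two quick checks cover most of the cases in the titles above (A1 is a placeholder cell reference):

```
Ctrl + `        toggles the worksheet-wide "Show Formulas" view on and off
                (a common accidental cause of every cell displaying its formula)
=ISTEXT(A1)     returns TRUE when the "formula" is actually stored as text, which
                happens when the cell was formatted as Text before the formula was
                typed; set the format to General and re-enter the formula
                (F2, then Enter) to fix it
```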

Display Specific rows/columns of a file

Fetch only first line from text file using Windows Batch File

How to select the first 10 columns of a headerless csv file using PowerShell?

How to remove First and Last Line in Powershell

#45 : Display top n lines or last n lines of a file

Copying the First n Lines of a Text File using PowerShell

PowerTip: Read First Line of File with PowerShell

Introduction To Using Text Files With PowerShell

Display first n lines, lines in middle or last n lines of a file with powershell
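The head/tail techniques in the titles above come down to two Get-Content parameters plus array indexing (log.txt is a placeholder file name; -Tail requires PowerShell 3.0 or later):

```powershell
Get-Content .\log.txt -TotalCount 10    # first 10 lines (head)
Get-Content .\log.txt -Tail 10          # last 10 lines (tail)
(Get-Content .\log.txt)[19..28]         # lines 20-29 (the array index is zero-based)
```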

Ways to grep a file in Powershell

tail + grep powershell equivalent

How to GREP in Powershell

Grep for Windows – findstr example

How to “grep” for multiple strings in files on Windows with PowerShell

HOW TO – GREP / FIND PowerShell Output

Comparison of grep command and sls (Select-String) of PowerShell

Grep, Search, Loops and Basename for Powershell Hotness


Grep-ing in Powershell

Grep like command in Windows?

Grep in PowerShell

PowerShell equivalent to grep -f

GREP in Powershell

Find / grep in PowerShell

Powershell and Grep

5. Understand the pipeline is object-based not text-based

Grep with Powershell

Emulating Grep in Powershell

PowerShell – Regular Expressions

The Powershell equivalent to grep

Searching for multiple strings in a file [closed]

Find multiple strings in text files powershell
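Select-String (alias sls) is the closest PowerShell analogue to grep, which is what most of the links above converge on. A sketch with placeholder paths and patterns:

```powershell
Select-String -Path .\*.log -Pattern "error"                  # like: grep error *.log
Select-String -Path .\*.log -Pattern "error","timeout"        # multiple patterns
Get-Content .\app.log -Tail 50 | Select-String "error"        # tail + grep combined
Select-String -Path .\*.log -Pattern "error" -CaseSensitive   # sls is case-insensitive
                                                              # by default, unlike grep
```

Remember that the pipeline is object-based: Select-String emits MatchInfo objects (with Path, LineNumber, and Line properties), not plain text.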

Upgrade or Migrate?

Should you upgrade or migrate?

Migration lets you move the configuration of an existing server to a new server computer. Migrations are often selected over upgrades because the process is less destructive and more recoverable. Depending on the services and functionality that you need to migrate to a new server instance, there will be different requirements and actions you need to take. Fundamentally, though, migration can be broken down into three phases.


Phase 1: Preparation

  • Installing and running migration tools, and identifying any prerequisites (for example, drivers and ports).
  • Preparing the source server (for example, backing up your data).
  • Preparing the destination server (for example, ensuring drivers and ports are available).

Phase 2: Migration

  • Exporting or migrating data from the source server.
  • Importing or migrating data to the destination server.

Phase 3: Verification

  • Verifying that the destination server is running successfully.
  • Decommissioning the source server.

Post-Installation Configuration steps

The post-installation process involves configuring all of the other settings that the server requires before it can be deployed to a production environment.

  1. Configure the IP address
  2. Set the computer name
  3. Join the domain
  4. Configure the time zone
  5. Enable automatic updates
  6. Add roles and features
  7. Enable remote desktop
  8. Configure Windows Firewall settings
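Several of the steps above can be scripted rather than done through the GUI. A sketch, in which every name and address is a placeholder (Set-TimeZone did not exist until later Windows versions, so tzutil is used here):

```powershell
# Steps 1-4, 7 and 8 above, scripted; all names/addresses are hypothetical.
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Rename-Computer -NewName "SRV01"
Add-Computer -DomainName "contoso.com"          # prompts for domain credentials
tzutil /s "W. Europe Standard Time"             # set the time zone
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" `
    -Name fDenyTSConnections -Value 0           # enable Remote Desktop
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
```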


What is Server Manager?

Server Manager is the primary graphical tool used to manage both local and remote servers. With Server Manager you can create groups of servers. This enables you to perform administrative tasks quickly across multiple servers that perform the same role, or are members of the same group.  Additionally, Server Manager provides access to many administrative tools.

What are Windows Server Features?

Windows Server 2012 features are independent components that often support role services or support the server directly. For example, Windows Server Backup is a feature because it only provides backup support for the local server. It is not a resource that other servers on the network can use.

The Add Roles and Features Wizard and the Remove Roles and Features Wizard in Server Manager modifies the features that are installed on the server.
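The wizards have PowerShell equivalents in the ServerManager module. A sketch using Windows Server Backup, the feature named in the paragraph above:

```powershell
Get-WindowsFeature *Backup*                       # discover the exact feature name
Install-WindowsFeature Windows-Server-Backup      # add the feature
Uninstall-WindowsFeature Windows-Server-Backup    # remove it again
```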

Windows Server 2012 - Server Roles

Which server role enables you to centrally configure, manage, and provide temporary IP addresses and related information for client computers?

Dynamic Host Configuration Protocol (DHCP) Server. The DHCP server enables you to centrally configure, manage, and provide temporary IP addresses and related information for client computers. IP addresses are used to uniquely identify the client computers on your network. 

Which server role provides the services that you can use to create and manage virtual machines and their resources?

Hyper-V Server. The Hyper-V Server provides services to create and manage virtual machines and their resources. Each virtual machine is a virtualized computer system that operates in an isolated execution environment. This allows you to run multiple operating systems simultaneously.

Which server role provides a reliable, manageable, and scalable Web application infrastructure?

Web Server (IIS). The Web Server provides a reliable, manageable, and scalable Web application infrastructure. IIS supports hosting of Web content in production environments.

Which server role stores information about objects on the network and makes this information available to users and network administrators?

Active Directory Domain Services (AD DS) Server. The AD DS server stores information about objects on the network and makes this information available to users and network administrators. Servers that run the AD DS Server role are called Domain Controllers. These servers provide network users access to resources through a single logon process. 

Which server role allows network administrators to specify the Microsoft updates that should be installed on different computers?

Windows Server Update Services (WSUS) Server. The WSUS server allows network administrators to specify the Microsoft updates that should be installed on different computers. Keeping your computers updated with the latest updates is an important part of securing the network. With WSUS you can automate this process and create different update schedules for your computers 


Which server feature allows multiple servers to work together to provide high availability of server roles?


Failover Clustering. Failover clustering is often used for File Services, virtual machines, database applications, and mail applications.

Which server feature includes snap-ins and command line tools for remotely managing roles and features?


Remote Server Administration Tools (RSAT). RSAT Tools are divided into Feature Administration Tools and Role Administration Tools. Feature Administration Tools include Failover Clustering Tools, IPAM Client, and Network Load Balancing Tools. Role Administration Tools include Hyper-V Management Tools, DHCP Server Tools, and Remote Access Management Tools.

Which server feature distributes network traffic across several servers, using the TCP/IP protocol?


Network Load Balancing (NLB). NLB is particularly useful for ensuring stateless applications, such as Web Servers running IIS, are scalable by adding additional services as the load increases.

Which server feature includes Windows PowerShell cmdlets that facilitate migration of server roles, operating system settings, files, and shares from computers that are running earlier versions of Windows Server?


Windows Server Migration Tools. Windows Server Migration Tools can also facilitate migration from one computer that is running Windows Server 2012 to another server that is running Windows Server 2012. For example when you are creating a backup server.

Which server feature provides a central framework for managing your IP address space and DHCP and DNS servers?


IP Address Management Server (IPAM). IPAM supports automated discovery of DHCP and DNS servers in the Active Directory forest. IPAM can also track and monitor IPv4 and IPv6 addresses, as well as providing utilization tools.

Active Directory (AD)

What are subnets? 

Subnets map network addresses to sites

Subnets identify the network addresses that map computers to AD DS sites. A subnet is a segment of a TCP/IP network to which a set of logical IP addresses are assigned. A site can consist of one or more subnets.

Keep your subnet information up to date

When you design your AD DS site configuration, it’s critical that you correctly map IP subnets to sites. Similarly, if the underlying network configuration changes, make sure that you update the configuration to reflect the new site mapping. Domain controllers use the AD DS subnet information to map client computers and servers to sites. If this mapping isn’t accurate, operations such as logon traffic and applying GPOs are likely to occur across WAN links, and may be disruptive.

When to create more OUs? 

Although you can manage a small organization without creating additional OUs, even small organizations typically create an OU hierarchy. An OU hierarchy lets you subdivide the administration of your domain for management purposes. There are basically two reasons to create OUs.

Application of GPOs. To group objects together to make it easier to manage them by applying Group Policy Objects (GPOs) to the whole group. You can link GPOs to the OU, and the settings apply to all objects within the OU. For example, you create an OU for contractors who have different security requirements than full-time employees.

Delegation of control. To delegate administrative control of objects within the OU. You can assign management permissions on an OU, thereby delegating control of that OU to an AD DS user or group. For example, you create an OU to manage a satellite office in a different geographical location. Then, you delegate control of the OU to a group.
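Both reasons can be sketched with two cmdlets (all OU, GPO, and domain names below are hypothetical; requires the ActiveDirectory and GroupPolicy RSAT modules):

```powershell
# Create an OU for contractors, then link an existing GPO to it.
New-ADOrganizationalUnit -Name "Contractors" -Path "DC=contoso,DC=com"
New-GPLink -Name "Contractor Lockdown" -Target "OU=Contractors,DC=contoso,DC=com"
```

Delegation of control over the new OU is then typically granted through the Delegation of Control Wizard in Active Directory Users and Computers.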

What is Active Directory Domain Services (AD DS)?

Active Directory Domain Services (AD DS) is a scalable, secure, and manageable infrastructure for user and resource management. AD DS is a Windows Server role that’s installed and hosted on a server known as a domain controller. AD DS uses the Lightweight Directory Access Protocol (LDAP) to access, search, and change the directory service. LDAP is based on the X.500 standard and runs over TCP/IP.

AD DS provides a centralized system for managing users, computers, and other resources on the network. AD DS features a centralized directory, single sign-on access, integrated security, scalability, and a common management interface.

The benefits and restrictions of Restartable AD DS feature

  • Reduces the time required to perform maintenance tasks.
  • Other services running on that domain controller can continue to run and serve client requests while the AD DS service is stopped.
  • You can continue to log on to the domain controller with a domain admin account while the AD DS service is stopped.
  • You cannot run DCPROMO while the AD DS service is stopped, although you can use the /Forceremoval switch with DCPROMO.
  • You cannot perform a system restore while the AD DS service is stopped; the system restore operation must always be executed from DSRM mode.
  • Stopping the AD DS service impacts the ability to authenticate domain clients and Active Directory-integrated applications.
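In practice, restartable AD DS is driven through the NTDS service, run locally on the domain controller from an elevated prompt:

```powershell
Stop-Service NTDS -Force     # -Force also stops dependent services (KDC, DNS, etc.)
# ...perform offline maintenance, e.g. offline defragmentation with ntdsutil...
Start-Service NTDS           # dependent services may need to be started again manually
```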

WinRM and PSRemoting



The Enable-PSRemoting cmdlet configures the computer to receive Windows PowerShell remote commands that are sent by using the WS-Management technology. Use it to enable Windows PowerShell remoting on supported versions of Windows.

You need to run this command only once on each computer that will receive commands. You do not need to run it on computers that only send commands. Because the configuration activates listeners, it is prudent to run it only where it is needed.

To run this cmdlet, start Windows PowerShell with the “Run as administrator” option.

CAUTION: On systems that have both Windows PowerShell 3.0 and the Windows PowerShell 2.0 engine, do not use Windows PowerShell 2.0 to run the Enable-PSRemoting and Disable-PSRemoting cmdlets. The commands might appear to succeed, but the remoting is not configured correctly. Remote commands, and later attempts to enable and disable remoting, are likely to fail.

  • In Windows PowerShell 3.0, Enable-PSRemoting creates the following firewall exceptions for WS-Management communications. On server versions of Windows, Enable-PSRemoting creates firewall rules for private and domain networks that allow remote access, and creates a firewall rule for public networks that allows remote access only from computers in the same local subnet. On client versions of Windows, Enable-PSRemoting in Windows PowerShell 3.0 creates firewall rules for private and domain networks that allow unrestricted remote access. To create a firewall rule for public networks that allows remote access from the same local subnet, use the SkipNetworkProfileCheck parameter. On client or server versions of Windows, to create a firewall rule for public networks that removes the local subnet restriction and allows remote access, use the Set-NetFirewallRule cmdlet in the NetSecurity module to run the following command: Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
  • In Windows PowerShell 2.0, Enable-PSRemoting creates the following firewall exceptions for WS-Management communications. On server versions of Windows, it creates firewall rules for all networks that allow remote access. On client versions of Windows, Enable-PSRemoting in Windows PowerShell 2.0 creates a firewall exception only for domain and private network locations. To minimize security risks, Enable-PSRemoting does not create a firewall rule for public networks on client versions of Windows. When the current network location is public, Enable-PSRemoting returns the following message: “Unable to check the status of the firewall.”
  • Beginning in Windows PowerShell 3.0, Enable-PSRemoting enables all session configurations by setting the value of the Enabled property of all session configurations (WSMan:\<ComputerName>\Plugin\<SessionConfigurationName>\Enabled) to True ($true).
  • In Windows PowerShell 2.0, Enable-PSRemoting removes the Deny_All setting from the security descriptor of session configurations. In Windows PowerShell 3.0, Enable-PSRemoting removes the Deny_All and Network_Deny_All settings, thereby providing remote access to session configurations that were reserved for local use.
Enable-PSRemoting -Force

This command configures the computer to receive remote commands. It uses the Force parameter to suppress the user prompts.

Enable-PSRemoting -SkipNetworkProfileCheck -Force

Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any

This example shows how to allow remote access from public networks on client versions of Windows. Before using these commands, analyze the security setting and verify that the computer network will be safe from harm.

The first command enables remoting in Windows PowerShell. By default, this creates network rules that allow remote access from private and domain networks. The command uses the SkipNetworkProfileCheck parameter to allow remote access from public networks in the same local subnet. The command uses the Force parameter to suppress confirmation messages.

The SkipNetworkProfileCheck parameter has no effect on server versions of Windows, which allow remote access from public networks in the same local subnet by default.

The second command eliminates the subnet restriction. The command uses the Set-NetFirewallRule cmdlet in the NetSecurity module to add a firewall rule that allows remote access from public networks from any remote location, including locations in different subnets.


Enables remoting on client versions of Windows when the computer is on a public network. This parameter enables a firewall rule for public networks that allows remote access only from computers in the same local subnet.

This parameter has no effect on server versions of Windows, which, by default, have a local subnet firewall rule for public networks. If the local subnet firewall rule is disabled on a server version of Windows, Enable-PSRemoting re-enables it, regardless of the value of this parameter.

To remove the local subnet restriction and enable remote access from all locations on public networks, use the Set-NetFirewallRule cmdlet in the NetSecurity module.

How to Run PowerShell Commands on Remote Computers

PowerShell Remoting lets you run PowerShell commands or access full PowerShell sessions on remote Windows systems. It’s similar to SSH for accessing remote terminals on other operating systems.

PowerShell is locked-down by default, so you’ll have to enable PowerShell Remoting before using it. This setup process is a bit more complex if you’re using a workgroup instead of a domain—for example, on a home network—but we’ll walk you through it.

Enable PowerShell Remoting on the PC You Want to Access Remotely

Your first step is to enable PowerShell Remoting on the PC to which you want to make remote connections. On that PC, you’ll need to open PowerShell with administrative privileges.

-In Windows 10, press Windows+X and then choose PowerShell (Admin) from the Power User menu.

-In Windows 7 or 8, hit Start, and then type “powershell.” Right-click the result and choose “Run as administrator.”

-In the PowerShell window, type the following cmdlet (PowerShell’s name for a command), and then hit Enter:

Enable-PSRemoting -Force

This command starts the WinRM service, sets it to start automatically with your system, and creates a firewall rule that allows incoming connections. The -Force part of the cmdlet tells PowerShell to perform these actions without prompting you for each step.

If your PCs are part of a domain, that’s all the setup you have to do. You can skip on ahead to testing your connection. If your computers are part of a workgroup—which they probably are on a home or small business network—you have a bit more setup work to do.

Note: Your success in setting up remoting in a domain environment depends entirely on your network’s setup. Remoting might be disabled—or even enabled—automatically by group policy configured by an admin. You might also not have the permissions you need to run PowerShell as an administrator. As always, check with your admins before you try anything like this. They might have good reasons for not allowing the practice, or they might be willing to set it up for you.

Set Up Your Workgroup

If your computers aren’t on a domain, you need to perform a few more steps to get things set up. You should have already enabled Remoting on the PC to which you want to connect, as we described in the previous section.

Note: For PowerShell Remoting to work in a workgroup environment, you must configure your network as a private, not public, network.

Next, you need to configure the TrustedHosts setting on both the PC to which you want to connect and the PC (or PCs) you want to connect from, so the computers will trust each other. You can do this in one of two ways.

If you’re on a home network where you want to go ahead and trust any PC to connect remotely, you can type the following cmdlet in PowerShell (again, you’ll need to run it as Administrator).

Set-Item wsman:\localhost\client\trustedhosts *

The asterisk is a wildcard symbol for all PCs. If instead you want to restrict computers that can connect, you can replace the asterisk with a comma-separated list of IP addresses or computer names for approved PCs.
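For example, to approve only specific machines rather than all of them, you might run something like this (the computer name and IP address here are placeholders, not values from this guide):

```powershell
# Trust only a named PC and one IP address instead of every computer
Set-Item wsman:\localhost\client\trustedhosts "OFFICE-PC,192.168.1.50"
```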

After running that command, you’ll need to restart the WinRM service so your new settings take effect. Type the following cmdlet and then hit Enter:

Restart-Service WinRM

And remember, you’ll need to run those two cmdlets on the PC to which you want to connect, as well as on any PCs you want to connect from.

Test the Connection

Now that you’ve got your PCs set up for PowerShell Remoting, it’s time to test the connection. On the PC you want to access the remote system from, type the following cmdlet into PowerShell (replacing “COMPUTER” with the name or IP address of the remote PC), and then hit Enter:

Test-WsMan COMPUTER
This simple command tests whether the WinRM service is running on the remote PC. If it completes successfully, you’ll see information about the remote computer’s WinRM service in the window—signifying that WinRM is enabled and your PC can communicate. If the command fails, you’ll see an error message instead.

Execute a Single Remote Command

To run a command on the remote system, use the Invoke-Command cmdlet using the following syntax:

Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND } -credential USERNAME

“COMPUTER” represents the remote PC’s name or IP address. “COMMAND” is the command you want to run. “USERNAME” is the username you want to run the command as on the remote computer. You’ll be prompted to enter a password for the username.

Here’s an example. I want to view the contents of the C:\ directory on a remote computer using the username “wjgle,” so I would use the following command (again replacing “COMPUTER” with the remote PC’s name or IP address):

Invoke-Command -ComputerName COMPUTER -ScriptBlock { Get-ChildItem C:\ } -credential wjgle

Start a Remote Session

If you have several cmdlets you want to run on the remote PC, instead of repeatedly typing the Invoke-Command cmdlet and the remote IP address, you can start a remote session instead. Just type the following cmdlet and then hit Enter:

Enter-PSSession -ComputerName COMPUTER -Credential USER

Again, replace “COMPUTER” with the name or IP address of the remote PC and replace “USER” with the name of the user account you want to invoke.

Your prompt changes to indicate the remote computer to which you’re connected, and you can execute any number of PowerShell cmdlets directly on the remote system.
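As a sketch of what such a session might look like (the cmdlets shown are just examples; Exit-PSSession returns you to your local prompt):

```powershell
Enter-PSSession -ComputerName COMPUTER -Credential USER
# The prompt changes to something like: [COMPUTER]: PS C:\Users\USER\Documents>
Get-Process                # runs on the remote system
Get-Service WinRM          # so does this
Exit-PSSession             # back to the local prompt
```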


Enable-PSRemoting configures a computer to receive PowerShell remote commands sent with WS-Management technology.

To run this cmdlet, start PowerShell with the “Run as administrator” option.

PS Remoting only needs to be enabled once on each computer that will receive commands.

Computers that only send commands do not need to have PS Remoting enabled; because the configuration activates listeners (and starts the WinRM service), it is prudent to run it only where needed.

To run a single command on the remote system, use Invoke-Command; for multiple commands, use Enter-PSSession.

If your computers aren’t on a domain, you’ll need to perform the following extra steps:

On both computers:

Configure the TrustedHosts setting so the computers will trust each other:

Set-Item WSMan:\localhost\client\trustedhosts PC64,PC65,PC66

The comma-separated list can be IP addresses or computer names or even a * wildcard to match all.

Then run: Restart-Service WinRM

To view the current trusted hosts:
Get-Item WSMan:\localhost\Client\TrustedHosts


Configure the local computer to receive remote commands:

PS C:\> Enable-PSRemoting

Configure the computer to receive remote commands & suppress user prompts:

PS C:\> Enable-PSRemoting -Force

Configure the remote computer PC64 to receive remote commands, via psexec. If you are running this from an account which is NOT a domain administrator, then specify the username/password of an account with admin rights to the remote machine:

PS C:\> psexec \\PC64 -u adminUser64 -p pa$$w0rd -h -d powershell.exe "enable-psremoting -force"

Test that the computer PC64 is set up to receive remote commands:

PS C:\> Test-WsMan PC64

Run a single command on the remote computer using Invoke-Command:

PS C:\> Invoke-Command -ComputerName PC64 -ScriptBlock { Get-ChildItem C:\ } -credential jdoe

Run multiple commands by starting a Remote PowerShell Session:

PS C:\> Enter-PSSession -ComputerName PC64 -Credential AshleyT

“He who lies hid in remote places is a law unto himself” ~ Publilius Syrus

Related PowerShell Commands:

Enter-PSSession – Start an interactive session with a remote computer.

Disable-PSRemoting – Prevent remote users from running commands on the local computer.

Test-WSMan – Test if a computer is setup to receive remote commands via the WinRM service.

Invoke-Command – Run commands on local and remote computers.

WINRM – Windows Remote Management

Question: PowerShell: Configure WinRM and enable PSRemoting

1 – Enable WinRM

The first thing to do before starting to manage your server remotely is to enable this function on your server. For this, you need to use the Windows Remote Management (WinRM) service. WinRM is the service that allows you to use the WS-Management protocol necessary for PowerShell remoting.

Enabling WinRM is quite simple: you just need to run this command in a PowerShell prompt:

winrm quickconfig (or the short form winrm qc)

It will display a message if WinRM is already configured; otherwise it will ask you to configure it.

2 – Enable PSRemoting

Once you have started your WinRM service, you must configure PowerShell itself to allow remoting:


Enable-PSRemoting

3 – TrustedHosts file configuration

3.1 – Add server to the TrustedHosts file

The configuration above implies a domain environment. If you are working with servers which are not in your domain or in a trusted domain, you will have to add them to the TrustedHosts list of your local server. To do so, you must run the command below:

winrm s winrm/config/client '@{TrustedHosts="MyServerName"}'

You just need to replace “MyServerName” with the name of your server.

Another way to add a server to this file is by using the Set-Item cmdlet, as below:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "MyServerName,MyServerName2"


In the command above you can see that I added two values between the quotes. If you want to add more than one server to this file, you must separate them with a comma. Be careful, though: if you later run the same command with only one server name, it will overwrite the existing list. You need to include all the server names that must be in this file.
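One way to avoid overwriting the existing entries is to read the current value and append to it; a minimal sketch (assuming the current value is not empty, and "MyServerName3" is a placeholder):

```powershell
# Read the current TrustedHosts value and append a new server to it
$current = (Get-Item WSMan:\localhost\Client\TrustedHosts).Value
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "$current,MyServerName3" -Force
```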

PowerShell will also prompt you to warn you about the risks of adding an untrustworthy computer to this file.

And if I do a Get-Item, I should see my two servers:

Get-Item WSMan:\localhost\Client\TrustedHosts |fl


If you want to trust every server which is not in your domain, even though that is far from being secure, you can use the wildcard, like this:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*"


And the result:

Get-Item WSMan:\localhost\Client\TrustedHosts |fl Name, Value


And of course, sometimes it can also be interesting to check this TrustedHosts file to see what is inside. You can use PowerShell for this too, with the Get-Item cmdlet:

Get-Item WSMan:\localhost\Client\TrustedHosts

3.2 – Remove servers from the TrustedHosts file

While you can easily add servers to your TrustedHosts file, it can also be useful to remove a server from it, for security reasons, when you don’t need it anymore.

To do so, there are two different ways…

Clear the whole file, by using the command below:

Clear-Item -Path WSMan:\localhost\Client\TrustedHosts -Force

Or, replace just one value with an empty value, using the commands below:

$newvalue = ((Get-Item WSMan:\localhost\Client\TrustedHosts).Value).Replace("MyServerName1,","")

Set-Item WSMan:\localhost\Client\TrustedHosts $newvalue

And by using this command, we can remove one server while still keeping the other servers in the list.

And there we are! Your PowerShell is now configured to handle the remote management.

Question: PowerShell Remoting Cheatsheet

I have become a big fan of PowerShell Remoting. I find myself using it for both penetration testing and standard management tasks. In this blog I’ll share a basic PowerShell Remoting cheatsheet so you can too.

Enabling PowerShell Remoting

Before we get started let’s make sure PowerShell Remoting is all setup on your system.

  1. In a PowerShell console running as administrator, enable PowerShell Remoting:
     Enable-PSRemoting -Force
     This should be enough, but if you have to troubleshoot you can use the commands below.
  2. Make sure the WinRM service is set up to start automatically.
     # Set start mode to automatic
     Set-Service WinRM -StartupType Automatic
     # Verify start mode and state - it should be running
     Get-WmiObject -Class win32_service | Where-Object {$_.Name -like "WinRM"}
  3. Set all remote hosts to trusted. Note: You may want to unset this later.
     # Trust all hosts
     Set-Item WSMan:\localhost\client\trustedhosts -Value *
     # Verify trusted hosts configuration
     Get-Item WSMan:\localhost\Client\TrustedHosts

Executing Remote Commands with PowerShell Remoting

  • Executing a Single Command on a Remote System
    The Invoke-Command cmdlet can be used to run commands on remote systems. It can run as the current user or using alternative credentials from a non-domain system. Examples below.
    Invoke-Command -ComputerName MyServer1 -ScriptBlock {Hostname}
    Invoke-Command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Hostname}
    If the ActiveDirectory PowerShell module is installed, it’s possible to execute commands on many systems very quickly using the pipeline. Below is a basic example.
    Get-ADComputer -Filter * -Properties name | select @{Name="computername";Expression={$_."name"}} | Invoke-Command -ScriptBlock {hostname}
    Sometimes it’s nice to run scripts stored locally on your system against remote systems. Below are a few basic examples.
    Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1
    Invoke-Command -ComputerName MyServer1 -FilePath C:\pentest\Invoke-Mimikatz.ps1 -Credential demo\serveradmin
    Also, if you’re dynamically generating commands or functions being passed to remote systems, you can use Invoke-Expression through Invoke-Command as shown below.
    $MyCommand = "hostname"
    $MyFunction = "function evil {write-host `"Getting evil...`";iex -command $MyCommand};evil"
    invoke-command -ComputerName MyServer1 -Credential demo\serveradmin -ScriptBlock {Invoke-Expression -Command "$args"} -ArgumentList $MyFunction
  • Establishing an Interactive PowerShell Console on a Remote System
    An interactive PowerShell console can be obtained on a remote system using the Enter-PSSession cmdlet. It feels a little like SSH. Similar to Invoke-Command, Enter-PSSession can be run as the current user or using alternative credentials from a non-domain system. Examples below.
    Enter-PSSession -ComputerName MyServer1
    Enter-PSSession -ComputerName MyServer1 -Credential domain\serveradmin
    If you want out of the PowerShell session, the Exit-PSSession command can be used.
    Exit-PSSession
  • Creating Background Sessions
    There is another cool feature of PowerShell Remoting that allows users to create background sessions using the New-PSSession cmdlet. Background sessions can come in handy if you want to execute multiple commands against many systems. Similar to the other commands, the New-PSSession cmdlet can run as the current user or using alternative credentials from a non-domain system. Examples below.
    New-PSSession -ComputerName MyServer1
    New-PSSession -ComputerName MyServer1 -Credential domain\serveradmin
    If the ActiveDirectory PowerShell module is installed, it’s possible to create background sessions for many systems at a time (however, this can be done in many ways). Below is a command example showing how to create background sessions for all of the domain systems. The example shows how to do this from a non-domain system using alternative domain credentials.
    New-PSDrive -PSProvider ActiveDirectory -Name RemoteADS -Root "" -Server a.b.c.d -Credential domain\user
    cd RemoteADS:
    Get-ADComputer -Filter * -Properties name | select @{Name="ComputerName";Expression={$_."name"}} | New-PSSession
  • Listing Background Sessions
    Once a few sessions have been established, the Get-PSSession cmdlet can be used to view them.
    Get-PSSession
  • Interacting with Background Sessions
    The first time I used this feature I felt like I was working with Metasploit sessions, but these sessions are a little more stable. Below is an example showing how to interact with an active session using the session id.
    Enter-PSSession -Id 3
    To exit the session, use the Exit-PSSession command. This will send the session into the background again.
    Exit-PSSession
  • Executing Commands through Background Sessions
    If your goal is to execute a command on all active sessions, the Invoke-Command and Get-PSSession cmdlets can be used together. Below is an example.
    Invoke-Command -Session (Get-PSSession) -ScriptBlock {Hostname}
  • Removing Background Sessions
    Finally, to remove all of your active sessions, the Remove-PSSession cmdlet can be used as shown below.
    Get-PSSession | Remove-PSSession

Wrap Up

Naturally PowerShell Remoting offers a lot of options for both administrators and penetration testers. Regardless of your use case I think it boils down to this:

  • Use “Invoke-Command” if you’re only going to run one command against a system
  • Use “Enter-PSSession” if you want to interact with a single system
  • Use PowerShell sessions when you’re going to run multiple commands on multiple systems

Hopefully this cheatsheet will be useful. Have fun and hack responsibly.

Question: Enable PSRemoting Remotely

So it’s been an interesting week for me at work as we brought a new customer online. It’s really great to be working with a dynamic team in a rapidly evolving environment. One of the things that’s keeping us ahead of the game is relying on PowerShell when performing repetitive tasks. In this week’s article I’m going to talk about a set of functions I had to come up with this week to start PSRemoting remotely.

I’d seen a bunch of postings where people used Schtasks.exe and/or PSExec to enable PSRemoting, but I didn’t like either of those approaches. I wanted to do it in a more native PowerShell way. I got a lot of help from Thomas Lee’s blog, where he talked about writing registry keys remotely using PowerShell.

From there I went on to write a set of functions, 5 total, that will perform all the steps required to enable PSRemoting. In order to accomplish the configuration of the WinRM service and the Windows Firewall remotely, I had the functions write entries in the policy node of the registry.

So I’ll stop talking and get onto the functions.

All of them can be downloaded here

Set-WinRMListener, works by creating 3 registry keys that configure the WinRM service when it restarts.

Restart-WinRM, Uses Get-WmiObject to start and stop the WinRM service.

Set-WinRMStartup, sets the startup type of the WinRM service to automatic.

Set-WinRMFirewallRule, creates 2 registry keys to configure the firewall exemptions required by PSRemoting.

Restart-WindowsFirewall, restarts the windows firewall service to allow the registry configurations to take hold.
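The exact functions are in the linked script, but the underlying remote-registry technique looks roughly like this (a sketch, assuming the Remote Registry service is reachable on the target; the key and value names here are illustrative, not necessarily the ones the functions write):

```powershell
# Open the remote machine's HKLM hive and write a policy value via .NET
$computer = 'PC64'
$hklm = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $computer)
$key  = $hklm.CreateSubKey('SOFTWARE\Policies\Microsoft\Windows\WinRM\Service')
$key.SetValue('AllowAutoConfig', 1, [Microsoft.Win32.RegistryValueKind]::DWord)
$key.Close()
$hklm.Close()
```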

Anyway, all the functions are defined in the script on the TechNet gallery.  I hope you guys like the functions and get some use out of them.  Go ahead and leave a comment or email me if you’re interested in further explanation.  Also, feel free to leave comments on the TechNet entry.

Question: Enable-PSRemoting

The Enable-PSRemoting cmdlet configures the computer to receive Windows PowerShell remote commands that are sent by using the WS-Management technology.

By default, on Windows Server 2012, Windows PowerShell remoting is enabled. You can use Enable-PSRemoting to enable Windows PowerShell remoting on other supported versions of Windows and to re-enable remoting on Windows Server 2012 if it becomes disabled.

You have to run this command only one time on each computer that will receive commands. You do not have to run it on computers that only send commands. Because the configuration starts listeners, it is prudent to run it only where it is needed.

Beginning in Windows PowerShell 3.0, the Enable-PSRemoting cmdlet can enable Windows PowerShell remoting on client versions of Windows when the computer is on a public network. For more information, see the description of the SkipNetworkProfileCheck parameter.

The Enable-PSRemoting cmdlet performs the following operations:

— Runs the Set-WSManQuickConfig cmdlet, which performs the following tasks:

—– Starts the WinRM service.

—– Sets the startup type on the WinRM service to Automatic.

—– Creates a listener to accept requests on any IP address, if one does not already exist.

—– Enables a firewall exception for WS-Management communications.

—– Registers the Microsoft.PowerShell and Microsoft.PowerShell.Workflow session configurations, if they are not already registered.

—– Registers the Microsoft.PowerShell32 session configuration on 64-bit computers, if it is not already registered.

—– Enables all session configurations.

—– Changes the security descriptor of all session configurations to allow remote access.

—– Restarts the WinRM service to make the preceding changes effective.
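A quick way to verify the results of the operations listed above is to inspect the WSMan: drive and the service (a sketch; run from the same elevated console):

```powershell
Get-ChildItem WSMan:\localhost\Listener    # the listener created above
Get-PSSessionConfiguration                 # the registered session configurations
Get-Service WinRM                          # service should be Running
```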

To run this cmdlet, start Windows PowerShell by using the Run as administrator option.

CAUTION: On systems that have both Windows PowerShell 3.0 and Windows PowerShell 2.0, do not use Windows PowerShell 2.0 to run the Enable-PSRemoting and Disable-PSRemoting cmdlets. The commands might appear to succeed, but the remoting is not configured correctly. Remote commands, and later attempts to enable and disable remoting, are likely to fail.

  1. Configure a computer to receive remote commands:
     PS C:\> Enable-PSRemoting
     This command configures the computer to receive remote commands.
  2. Configure a computer to receive remote commands without a confirmation prompt:
     PS C:\> Enable-PSRemoting -Force
     This command configures the computer to receive remote commands. It uses the Force parameter to suppress the user prompts.
  3. Allow remote access on clients:
     PS C:\> Enable-PSRemoting -SkipNetworkProfileCheck -Force
     PS C:\> Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
     This example shows how to allow remote access from public networks on client versions of the Windows operating system. Before using these commands, analyze the security setting and verify that the computer network will be safe from harm. The first command enables remoting in Windows PowerShell. By default, this creates network rules that allow remote access from private and domain networks. The command uses the SkipNetworkProfileCheck parameter to allow remote access from public networks in the same local subnet. The command specifies the Force parameter to suppress confirmation messages. The SkipNetworkProfileCheck parameter does not affect server versions of the Windows operating system, which allow remote access from public networks in the same local subnet by default. The second command eliminates the subnet restriction. The command uses the Set-NetFirewallRule cmdlet in the NetSecurity module to add a firewall rule that allows remote access from public networks from any remote location. This includes locations in different subnets.

REVISE: Hyper-V Resource Control and Resource Metering

Very impressive links on Hyper-V Resource Control and Resource Metering.

Question: Windows Server 2012 and 2012 R2

IT organizations need tools to charge back business units that they support while providing the business units with the right amount of resources to match their needs. For hosting providers, it is equally important to issue chargebacks based on the amount of usage by each customer.

To implement advanced billing strategies that measure both the assigned capacity of a resource and its actual usage, earlier versions of Hyper-V required users to develop their own chargeback solutions that polled and aggregated performance counters. These solutions could be expensive to develop and sometimes led to loss of historical data.

To assist with more accurate, streamlined chargebacks while protecting historical information, Hyper-V in Windows Server 2012 introduces Resource Metering, a feature that allows customers to create cost-effective, usage-based billing solutions. With this feature, service providers can choose the best billing strategy for their business model, and independent software vendors can develop more reliable, end-to-end chargeback solutions on top of Hyper-V.

Key benefits

Hyper-V Resource Metering in Windows Server 2012 allows organizations to avoid the expense and complexity associated with building in-house metering solutions to track usage within specific business units. It enables hosting providers to quickly and cost-efficiently create a more advanced, reliable, usage-based billing solution that adjusts to the provider’s business model and strategy.

Use of network metering port ACLs

Enterprises pay for the Internet traffic in and out of their data centers, but not for the network traffic within their data centers. For this reason, providers generally consider Internet and intranet traffic separately for the purposes of billing. To differentiate between Internet and intranet traffic, providers can measure incoming and outgoing network traffic for any IP address range, by using network metering port ACLs.

Virtual machine metrics

Windows Server 2012 provides two options for administrators to obtain historical data on a client’s use of virtual machine resources: Hyper-V cmdlets in Windows PowerShell and the new APIs in the Virtualization WMI provider. These tools expose the metrics for the following resources used by a virtual machine during a specific period of time:

  • Average CPU usage, measured in megahertz over a period of time.
  • Average physical memory usage, measured in megabytes.
  • Minimum memory usage (lowest amount of physical memory).
  • Maximum memory usage (highest amount of physical memory).
  • Maximum amount of disk space allocated to a virtual machine.
  • Total incoming network traffic, measured in megabytes, for a virtual network adapter.
  • Total outgoing network traffic, measured in megabytes, for a virtual network adapter.

Movement of virtual machines between Hyper-V hosts—for example, through live, offline, or storage migrations—does not affect the collected data.

Question: Introduction to Resource Metering

Hi, I’m Lalithra Fernando, a program manager on the Hyper-V team, working in various areas including clustering and authorization, as well as with our Hyper-V MVPs. In this post, I’ll be talking about resource metering, a new feature in Hyper-V in Windows Server 2012.

As you’ve probably heard by now, Windows Server 2012 is a great platform for the private cloud. When we began planning this release, we realized that one of the things you need in order to run a cloud is to be able to charge your users for the resources they use.

This is the need resource metering fills. It allows you to measure the resource utilization of your virtual machines. You can use this information as a platform for your own dynamic chargeback solutions, where you can charge customers based on the resources they use instead of a flat upfront cost, or to plan your hosting capacity appropriately.

There are four resources that you can measure: your CPU, memory, network, and storage utilization. We measure these resources over the period of time between when you measure and when you last reset metering.
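That measure-and-reset cycle maps onto the Hyper-V metering cmdlets; a minimal sketch (the VM name is a placeholder):

```powershell
Enable-VMResourceMetering -VMName "MyVM"   # start collecting metrics
Measure-VM -VMName "MyVM"                  # report usage since the last reset
Reset-VMResourceMetering -VMName "MyVM"    # close the billing cycle and start a new one
```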

CPU (MHz): We report the average utilization in megahertz.

Now, you’re probably wondering why we don’t report this as a percentage. After all, that’s what we do in Hyper-V Manager. Well, we know that you like to move your virtual machines. With Windows Server 2012, you can live migrate your virtual machines all over the place. Naturally, the record of the resources your virtual machine has used moves with it.

We want the virtual machine’s CPU utilization to make sense across multiple machines. If we report a percentage and you move the virtual machine to a host with different processing capabilities, it’s no longer clear what the percentage refers to.

Instead, we report in megahertz. For example, if your virtual machine had an average CPU utilization of 50% over the past billing cycle on a host with a CPU running at 3GHz, we would report 1500MHz.

If your virtual machine spent one hour on a host with a 3GHz CPU and used 50% and another hour on a host with 1GHz CPU and used 75%, we would report the following:

(3GHz * 1000MHz/1GHz * .5 * 1hr) + (1GHz * 1000MHz/1GHz * .75 * 1hr) = 2250MHz-Hr

Here I am converting the CPU capacity from GHz to MHz and figuring out how much of that capacity was used over each hour.

2250MHz-Hr / 2 Hours = 1125 MHz.

Then, I simply divide over the two hours to get this value.
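The same arithmetic, written out as a quick check:

```powershell
# Weighted CPU average over the two hours from the example above
$mhzHours = (3000 * 0.5 * 1) + (1000 * 0.75 * 1)   # = 2250 MHz-hours
$avgMhz   = $mhzHours / 2                          # = 1125 MHz
```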

One final note: we don’t report minimum and maximum utilization values for CPU. If you think about it for a moment, you’ll come to the same realization we did: it is very likely that the minimum will be 0 and the maximum will be the full capacity of the host’s CPU at some point over the timespan you’re measuring. Since that’s not very useful, we don’t report it.

Memory (MB): We report the average, maximum, and minimum utilization here, in megabytes.

The minimum memory utilization captures the least memory used over the timespan measured. Since it’s not very useful to know that the minimum memory usage was zero if the virtual machine was ever turned off, we only look at the minimum memory utilization when the virtual machine is running.

We do include the offline time of the virtual machine when calculating the average memory utilization. This provides an accurate view of how much memory the virtual machine was using over that billing cycle, so that you can charge your users accurately.

Network (MB): We report network utilization in megabytes. Of course, we want this metric to be useful, so we considered how you would want to see this information broken down. One way you might want to break down network traffic is to see how much is inbound to the virtual machine, and how much is outbound.

The most important breakdown you will want is how much traffic the virtual machine sends to or receives from the internet, which costs you money, versus how much is just communication between virtual machines you host, which costs you nothing since it only uses your intranet. With this breakdown, you can charge your users accurately for their internet usage.

So how do we provide these breakdowns? We use ACLs set on the virtual machine’s network adapter. Each ACL has

  • Direction
    • “Inbound” or “Outbound”
  • Remote IP Address
    • The source or destination of the network packet, depending on direction
  • Action
    • Allow, Deny, or Meter

These ACLs are used for more than just resource metering; note the Allow and Deny actions. For our purposes, you set the action to “Meter”.

Enabling resource metering creates two sets of default metering ACLs, provided none are already configured. The first set of ACLs, one inbound and one outbound, has a remote IP address of *.*; this wildcard notation indicates that it will meter all IPv4 traffic that is not covered by another ACL. The second set has an IP address of *:*, which meters all IPv6 traffic.

With these metering ACLs, you can measure the total network traffic sent and received by the virtual machine, in megabytes. You can configure your own ACLs to count intranet traffic separately from internet traffic, and charge accordingly.
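To make the breakdown concrete, here is a hypothetical Python model of how metering ACLs could classify traffic into intranet and internet buckets. The matching order, the `bucket` field, and the 10.0.0.0/8 intranet range are assumptions for illustration only, not Hyper-V's actual implementation:

```python
# Illustrative model of metering ACL classification: more-specific ACLs are
# checked first, and the "*.*" wildcard catches any remaining IPv4 traffic.
import ipaddress

def classify(remote_ip, direction, acls):
    """Return the first ACL whose direction and remote address match."""
    for acl in acls:
        if acl["direction"] != direction:
            continue
        if acl["remote"] == "*.*":
            return acl  # IPv4 wildcard catch-all
        if ipaddress.ip_address(remote_ip) in ipaddress.ip_network(acl["remote"]):
            return acl
    return None

# A specific intranet ACL, then the wildcard ACL for everything else:
acls = [
    {"direction": "Outbound", "remote": "10.0.0.0/8", "bucket": "intranet"},
    {"direction": "Outbound", "remote": "*.*", "bucket": "internet"},
]
print(classify("10.1.2.3", "Outbound", acls)["bucket"])       # intranet
print(classify("93.184.216.34", "Outbound", acls)["bucket"])  # internet
```

The design choice mirrored here is that metering buckets are defined entirely by which ACL a packet matches, so adding a subnet-specific ACL automatically carves that traffic out of the wildcard totals.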

Disk (MB): As we spoke with customers, we realized that for chargeback purposes, they were only interested in the total disk allocation for a virtual machine. So, here we report that in megabytes.

The total value is the capacity (not the current size on disk) of the VHDs attached to the virtual machine plus the size of the snapshots. Take the following examples:

Fixed-size disk:

VM with a single 100GB fixed-size VHD attached.

Total disk allocation reported: 100GB

Dynamic disk:

VM with a single dynamic VHD attached; current size 30GB, maximum size 100GB.

Total disk allocation reported: 100GB

With snapshots:

VM with a single dynamic VHD attached; current size 30GB, maximum size 100GB, plus a 20GB snapshot.

Total disk allocation reported: 120GB

Pass-through disks, DAS disks, guest iSCSI connections, and virtual Fibre Channel disks are not included in the total disk allocation metric.
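The allocation rule can be summarized in a tiny Python sketch, using the example figures from the text. This is an illustrative model, not how Hyper-V computes the value internally:

```python
# Illustrative model of the total disk allocation metric: sum the *capacity*
# (maximum size) of each attached VHD, plus the size of any snapshots.
# Pass-through, DAS, guest iSCSI, and virtual Fibre Channel disks are excluded.

def total_disk_allocation_gb(vhd_capacities_gb, snapshot_sizes_gb=()):
    return sum(vhd_capacities_gb) + sum(snapshot_sizes_gb)

print(total_disk_allocation_gb([100]))        # fixed 100GB disk -> 100
print(total_disk_allocation_gb([100]))        # dynamic disk, max 100GB -> 100 (current size ignored)
print(total_disk_allocation_gb([100], [20]))  # plus a 20GB snapshot -> 120
```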

Once you enable resource metering, Hyper-V will begin collecting data. You can reset metering at any time. We will then discard the data we have collected up to that point and start fresh. So, you will typically measure the utilization first, and then reset. When you measure, you are measuring the utilization over the timespan since you last reset metering. Metering is designed to collect this data over long periods of time. If you need greater granularity, you can look at performance counters. 

Having resource metering enabled and just capturing utilization data per your billing cycle has no noticeable performance impact. There will be some negligible disk and CPU activity as data is written to the configuration file.

You can try this all out for yourself now, with Windows Server 2012. In the next part, we’ll talk about how to actually use resource metering with our PowerShell cmdlets.

We hope this is useful for you. Please let us know how you’re using it! Thanks!

Question: Configuring Hyper-V Resource Metering

Windows Server 2012 Hyper-V contains a resource metering mechanism that makes it possible to track system resource usage either for a virtual machine or for a collection of virtual machines. Doing so can help you to keep track of the resources consumed by virtual machine collections. This information could be used to facilitate chargebacks (although Hyper-V does not contain a native chargeback mechanism).

Resource metering is not enabled by default. You can enable resource metering through PowerShell by entering the following command:

Get-VM <virtual machine name> | Enable-VMResourceMetering

By default, Hyper-V collects resource metering statistics once every hour. You can change the collection frequency, but it is a good idea to avoid collecting metering data too frequently because doing so can impact performance and generate an excessive amount of metering data. If you want to change the collection frequency you can do so by using this command:

Set-VMHost -ComputerName <host server name> -ResourceMeteringSaveInterval <HH:MM:SS>

As you look at the command above, you will notice that the collection frequency is being set at the host server level. You cannot adjust the frequency on a per VM basis. You can see what this command looks like in figure 1.

Figure 1. You can change the resource metering collection frequency.

When you enable resource metering, there are a number of different resource usage statistics that are compiled. These statistics include:

  • The average CPU usage (measured in MHz)
  • The average physical memory usage (measured in MB)
  • The minimum memory usage (measured in MB)
  • The maximum memory usage (measured in MB)
  • The maximum amount of disk space allocated to a VM
  • The total inbound network traffic (measured in MB)
  • The total outbound network traffic (measured in MB)

The easiest way to view a virtual machine’s resource usage is to enter the following command:

Get-VM <virtual machine name> | Measure-VM

This command will display all of the available metering data for the virtual machine that you have specified.

Similarly, resource metering data can be displayed for all of the virtual machines that are running on the host server. If you want to see monitoring data for all of the virtual machines, you can acquire it by running this command:

Get-VM | Measure-VM

You can see what the output looks like in figure 2.

Figure 2. This is what the resource metering output looks like.

Oftentimes, administrators are interested in specific aspects of resource consumption. For example, if a particular host server had limited network bandwidth available, then an administrator would probably be interested in seeing the amount of network traffic that each virtual machine was sending and receiving. Conversely, if that same server had far more processing power than what would ever be needed by the virtual machines that are running on it, then the administrator probably would not need to monitor the average CPU usage.

Although you cannot turn data collection on or off for individual statistics, you can configure PowerShell to display only the statistics that you are interested in. The key to doing so is to know the object names that PowerShell assigns to each statistic. You can see the object names by entering the following command:

Get-VM | Measure-VM | Select-Object *

The column on the left side of the output lists the names that PowerShell uses for the individual statistics. You can see what this looks like in figure 3.

Figure 3. You can get the object names from the column on the left.

There are a couple of things that you might have noticed in the figure above. First, there are more objects than what are displayed by default. Second, there are more objects than what I listed earlier. The reason for this is that these screen captures came from a server running Windows Server 2012 R2 Preview. Microsoft is extending the Resource Metering feature in Hyper-V 2012 R2 to include additional metering objects. In this article however, I only listed the objects that are available today.

With that in mind, let’s suppose that you only wanted to list the maximum memory consumption for each VM. You could do so by using this command:

Get-VM | Measure-VM | Select-Object VMName, MaxRAM

You can see the output in figure 4. Keep in mind that you can adapt this command to display any combination of objects that you want.

Figure 4. PowerShell can display specific resource metering data.

As you can see, resource metering is useful for tracking resource consumption. It can also be useful for performing chargebacks, although there is no native Hyper-V chargeback mechanism.

Question: Resource Metering in Hyper-V

Why is Hyper-V Resource Metering Important?

The American National Institute of Standards and Technology gives us one of the best definitions of a cloud in their Special Publication 800-145, entitled “The NIST Definition of Cloud Computing.” In this document they describe a cloud as having five essential characteristics. One of the traits that they describe as being necessary to have a cloud is a measured service:

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

What does this mean? A cloud enables a tenant to consume just what they need and pay for what they use. The cloud must be able to measure that usage. Using this information, a cloud vendor can charge the tenant for their resource usage.

Hyper-V resource metering

Resource metering is an important part of any Hyper-V deployment. (Image: Dreamstime)

That’s fine for a hosting company. What about in a private cloud, that is, an infrastructure that runs on-premises? Traditionally the IT department is run as a cost center, or as the board of directors unfortunately sees it, as a budgetary black hole. IT can change this incorrect perception in one of two ways:

  • Cross-charging: Every department that consumes IT services and resources will be given their own IT budget to spend with the IT department. IT will provide those services and resources, and invoice their internal customers on a regular basis in a non-profit manner. This changes IT into a service organization.
  • Show-back reporting: Many organizations will never consider changing IT into a chargeable service. However, IT can show the business leaders the cost of providing services to each of the business groups by reporting usage and translating that usage into a monetary value. This is a company politics move that can change the perception of IT within the business.

In my opinion, these sorts of actions could misfire and lead to talk of outsourcing and offshoring, so be careful!

What Information is Collected?

Resource metering will collect the following data for each enabled virtual machine:

  • Average CPU usage in MHz
  • Average physical memory usage
  • Minimum physical memory usage
  • Maximum memory usage
  • Maximum physical amount of disk
  • Total incoming network traffic
  • Total outgoing network traffic

Note that in the case of dynamic virtual hard disks, the potential size, not the actual size, is reported for maximum physical amount of disk. Also note that all data is stored with the virtual machine and moves with the virtual machine as it migrates between hosts.

Enabling and Using Hyper-V Resource Metering

Resource metering is made available using PowerShell. You can write scripts using PowerShell or you can use other tools to leverage the functionality.

You must first enable metering on a per-virtual machine basis. The following snippet will enable metering on all virtual machines on a host:


Get-VM -ComputerName Demo-Host2 | Enable-VMResourceMetering

Tip: Remember to enable metering on any virtual machine created afterwards because the above cmdlet will only affect existing virtual machines.

By default, resource metering will collect metrics every hour. This is based on a per-host setting called ResourceMeteringSaveInterval. You might want to change this setting to match your billing rate in a cloud. If you are just testing resource metering, then you might want a more frequent collection. This example will change the setting to every 10 seconds:


Set-VMHost -ComputerName Demo-Host2 -ResourceMeteringSaveInterval 00:00:10

After the resource metering interval has passed, you will want to collect some metrics. Here’s a quick way to see all collected data:


Get-VM -ComputerName Demo-Host2 | Measure-VM

Measuring resource usage by Hyper-V virtual machines. (Image: Aidan Finn)

You could do something more targeted:


Measure-VM -ComputerName Demo-Host2 -VMName VM01

You can get a breakdown of bandwidth usage using:


(Measure-VM -ComputerName Demo-Host2 -VMName VM01).NetworkMeteredTrafficReport

Armed with this information, it won’t take you too long to find resource hogs on your hosts:


Get-VM -ComputerName Demo-Host2 | Measure-VM | Sort-Object -Property AverageProcessorUsage -Descending | Select-Object -First 3 -Property VMName,AverageProcessorUsage

Reporting on the top resource consumers. (Image: Aidan Finn)

Resource metering is a tool that can show the value of IT to the business or enable a service provider to earn revenue. And with this data, you even have some ability to track usage for diagnostic reasons.


IT professionals need tools to track usage by specific business units. If you search, you can find a lot of monitoring tools that can do this job, but most of them are paid, and the free open-source ones require advanced knowledge to install, configure, and enable the metrics for the resources you want.

I don’t want to say you shouldn’t use a monitoring tool in your environment. But it takes time and people to do this. If you are on your own, you need a quick solution until you decide on a monitoring solution for your environment.

So today, in this article, I will show you another feature introduced in Windows Server 2012 Hyper-V that isn’t immediately obvious and is driven by Windows PowerShell. I will explain only the basic commands that you can use every day to measure the metrics of your VMs. The feature is amazing, and I will surely come back with more advanced Resource Metering commands.

Resource Metering exposes metrics for the following resources used by a virtual machine during a specific period of time:

  • Average CPU usage, measured in megahertz over a period of time.
  • Average physical memory usage, measured in megabytes.
  • Minimum memory usage (lowest amount of physical memory).
  • Maximum memory usage (highest amount of physical memory).
  • Maximum amount of disk space allocated to a virtual machine.
  • Total incoming network traffic, measured in megabytes, for a virtual network adapter.
  • Total outgoing network traffic, measured in megabytes, for a virtual network adapter.

Let’s put this into practice.

  • As usual, open a PowerShell prompt as Administrator.
  • First, we must enable Resource Metering on the VM, so type:
    Enable-VMResourceMetering -VMName WIN2012X64

  • If you want to verify that Resource Metering is enabled on the VM, type:
    Get-VM -Name WIN2012X64 | Format-Table Name, ResourceMeteringEnabled

  • Let’s see the resource metrics that we get from the VM:
    Measure-VM -VMName WIN2012X64

  • Let’s see more details for the VM:
    Measure-VM -VMName WIN2012X64 | Format-List

  • If you want to measure network traffic:
    (Measure-VM -VMName WIN2012X64).NetworkMeteredTrafficReport

This is my last article for the 2015. I will come back with new Articles and Tutorials in 2016.

Merry Christmas and a Happy New Year!!!!!


With Windows Server 2012, Microsoft introduced a new Hyper-V feature called Resource Metering, which allows you to measure the usage of a virtual machine. This allows you to track CPU, memory, disk, and network usage. This is a great feature, especially if you need to do chargeback, or maybe even for troubleshooting.

Last week I had the chance to test and implement this feature for a customer.

First, you can check the available PowerShell cmdlets for Hyper-V, or just the commands which include VMResourceMetering.

Get-Command -Module Hyper-V
Get-Command *VMResourceMetering*

The resource metering has to be enabled per Virtual Machine. This is great, so even if you move the virtual machine from one Hyper-V host to another you still have the usage data.

To enable the resource metering you can use the following cmdlet. In my case I enable VM Resource Metering for my VM called SQL2012.

Get-VM SQL2012 | Enable-VMResourceMetering

With the cmdlet Measure-VM you can get the statistics for the VM.

Measure-VM -VMName SQL2012
Get-VM SQL2012 | Measure-VM | Select-Object *

To get the network traffic use the properties of the NetworkMeteredTrafficReport.

(Measure-VM -VMName SQL2012).NetworkMeteredTrafficReport

Here is another great thing: if you want to measure network traffic from or to a specific network, you can use VM network adapter ACLs to do so. With ACLs you can not just allow or deny network traffic, you can also meter network traffic for a specific subnet or IP address.

Add-VMNetworkAdapterAcl -VMName SQL2012 -Action Meter -RemoteIPAddress <IP address or subnet> -Direction Outbound

Of course you can reset the statistics for the VM.

Get-VM SQL2012 | Reset-VMResourceMetering

And to disable resource metering for the VM use:

Get-VM SQL2012 | Disable-VMResourceMetering

I think this is one of the great new features of Windows Server 2012 Hyper-V that doesn’t get a lot of attention but is really important.

REVISE: Time Synchronization in Windows Hyper-V

The sections below give a detailed clarification of how the clocks of guest operating systems in Windows Hyper-V work and how to solve the associated problems.

Question: Time Synchronization in Hyper-V

There is a lot of confusion about how time synchronization works in Hyper-V – so I wanted to take the time to sit down and write up all the details. 

There are actually multiple problems that exist around keeping time inside of virtual machines – and Hyper-V tackles these problems in different ways.

Problem #1 – Running virtual machines lose track of time.

While all computers contain a hardware clock (called the RTC – or real-time clock) most operating systems do not rely on this clock.  Instead they read the time from this clock once (when they boot) and then they use their own internal routines to calculate how much time has passed.

The problem is that these internal routines make assumptions about how the underlying hardware behaves (how frequently interrupts are delivered, etc…) and these assumptions do not account for the fact that things are different inside a virtual machine.  The fact that multiple virtual machines need to be scheduled to run on the same physical hardware invariably results in minor differences in these underlying systems.  The net result of this is that time appears to drift inside of virtual machines.

UPDATE 11/22: One thing that you should be aware of here: the rate at which the time in a virtual machine drifts is affected by the total system load of the Hyper-V server.  More virtual machines doing more stuff means time drifts faster.

In order to deal with time drift in a virtual machine – you need to have some process that regularly gets the real time from a trusted source and updates the time in a virtual machine.

Hyper-V provides the time synchronization integration services to do this for you.  The way it does this is by getting time readings from the management operating system and sending them over to the guest operating system.  Once inside the guest operating system, these time readings are delivered to the Windows time keeping infrastructure in the form of a Windows time provider.  These time samples are correctly adjusted for any time zone difference between the management operating system and the guest operating system.

Problem #2 – Saved virtual machines / snapshots have the wrong time when they are restored.

When we restore a virtual machine from a saved state or from a snapshot we put back together the memory and run state of the guest operating system to exactly match what it was when the saved state / snapshot was taken.  This includes the time calculated by the guest operating system.  So if the snapshot was taken one month ago – the time and date will report that it is still one month ago.

Interestingly enough, at this point in time we will be reporting the correct (with some caveats) time in the system’s RTC.  But unfortunately the guest operating system has no idea that anything significant has happened – so it does not know to go and check the RTC and instead continues with its own internally calculated time.

To deal with this the Hyper-V time synchronization integration service detects whenever it has come back from a saved state or snapshot, and corrects the time.  It does this by issuing a time change request through the normal user mode interfaces provided by Windows.  The effect of this is that it looks just like the user sat down and changed the time manually.  This method also correctly adjusts for time zone differences between the management operating system and the guest operating system.

Problem #3 – There is no correct “RTC value” when a virtual machine is started

As I have mentioned – physical computers have a RTC that operating systems look at when they first boot to get the time.  This real-time clock is backed by a small battery (you have probably seen the battery yourself if you have ever pulled apart a computer).  Unfortunately virtual machines do not have any “batteries”.  When a virtual machine is turned off there is no component that keeps track of time for it.  Instead – whenever you start a virtual machine we take the time from the management operating system and put this into the real-time clock of the virtual machine.

This is done without the use of the Hyper-V time synchronization integration services (it happens long before the integration services have loaded).

The downside of this approach is that this does not take into account any potential time zone differences between the management operating system and the guest operating system.  The reason for this is that “time zones” are a construct of the software that runs in a virtual machine – and is not communicated to the virtual hardware in any way.  So – in short – when we start a virtual machine there is no way for us to know what time zone the guest operating system believes it is in.

One partial mitigation we have for this issue is that when the Hyper-V time synchronization component loads for the first time – it does an initial user mode set of the time to ensure that the time gets corrected as quickly as possible (using the same technique as discussed in problem #2).

So now that you understand how this all works – let’s discuss some common issues and questions around virtual machines and time synchronization.

Question #1 – I have a virtual machine that is configured for a different time zone to the management operating system.  Should I disable the time synchronization component of Hyper-V?

No, no, no, no, no, no, no.  And I say again – no.  As I have mentioned above – all time synchronization that is done by the Hyper-V time synchronization integration service is time zone aware.  If you disable the Hyper-V time synchronization integration service you will disable all the time synchronization aspects of Hyper-V that are time zone aware – and only leave the initial RTC synchronization active – which is not time zone aware.

This means that your virtual machines will go from booting in the wrong time zone, and then being corrected as soon as the Hyper-V time synchronization integration service loads to booting in the wrong time zone and staying in the wrong time zone.

Question #2 – Is there any way that I can stop Hyper-V from putting the wrong time in the RTC at boot?

In short; no.  We need to put something in there – and that is the best thing that we have to work with.

Question #3 – Can’t you use UTC time in the RTC so that the correct time is established when the virtual machine boots?

UTC (which is the computer techy version of saying GMT) time would solve this problem nicely with only one problem.  Windows does not support UTC time in the BIOS (Linux does).  So while this would solve the problem for our Linux running user base – the fact of the matter is that most of our users run Windows – and this would not work for them.

Question #4 – What about if I am using a different time synchronization source (e.g. domain time or a remote time server)?

Hyper-V time synchronization was designed to “get along well” with other time synchronization sources.  You should not need to disable Hyper-V time synchronization in order to use a different time synchronization source – as long as it goes through the Windows time synchronization infrastructure.

In fact – if you are running a Domain Controller inside a virtual machine I would recommend that you leave Hyper-V time synchronization enabled but that you also set up an external time source.  You can do this by going to the relevant KB article and following the steps outlined in the “Configuring the Windows Time service to use an external time source” section.

UPDATE 11/22: I should have mentioned: since virtual machines tend to lose time much faster than physical computers, you need to configure any external time source to be checked frequently.  Once every 15 minutes is a good place to start.

Question #5 – How can I check what time source is being used by Windows inside of a virtual machine?

This is easy to do.  Just open an administrative command prompt and run “w32tm /query /source”.  If you are synchronizing with a remote computer – its name should be listed.  If you are using the Hyper-V time synchronization integration service you should see the following output:


If you see this output:


It means that there is no time synchronization going on for this virtual machine.  This is a very bad thing – as time will drift inside of the virtual machine.

Question #6 – Wait a minute!  My virtual machine should be synchronizing to the domain (or an external server) – but when I run that command it tells me that the Hyper-V time synchronization provider is being used!  How do I fix this!

I do not know why this happens – but sometimes it happens.  The first thing that you should do is to check that your domain does have a correctly configured authoritative time source.  There have been a small number of times when I have seen this problem being caused by the lack of an authoritative time source.

Alternatively – you can “partially disable” Hyper-V time synchronization.  The reason why I say “partially disable” is that you do not want to turn off the aspects of Hyper-V time synchronization that fix the time after a virtual machine has booted for the first time, or after the virtual machine comes back from a saved state.  No other time synchronization source can address these scenarios elegantly.

Luckily – there is a way to leave this functionality intact but still ensure that the day to day time synchronization is conducted by an external time source.  The key trick here is that it is possible to disable the Hyper-V time synchronization provider in the Windows time synchronization infrastructure – while still leaving the service running and enabled under Hyper-V.

To do this you will need to log into the virtual machine, open an administrative command prompt and run the following commands:

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t reg_dword /d 0

This command stops W32Time from using the Hyper-V time synchronization integration service for moment-to-moment synchronization.  Remember from earlier in this post that we do not go through the Windows time synchronization infrastructure to correct the time in the event of virtual machine boot / restore from saved state or snapshot.  So those operations are unaffected.

w32tm /config /syncfromflags:DOMHIER /update

This command tells Windows to go and look for the best time source in the domain hierarchy.  If you want to use an external time server instead you can use the commands found here:

net stop w32time & net start w32time

w32tm /resync /force

These two commands just “kick the Windows time service” to make sure the settings changes take effect immediately.

w32tm /query /source

This final command should confirm that everything is working as expected.

When you run these commands you should see something like this:


Question #7 – I have a virtual machine that has gotten ahead of time, and it never gets corrected back to the correct time.  What is going on here?

As a general rule of thumb, when time drifts inside a virtual machine it runs slower than in the real world, and the time falls behind.  We will always detect and correct this.

However, in the past, we have had reports of software problems caused when the Hyper-V time synchronization integration service decides to adjust the time back – because it believes the virtual machine is ahead of time.  To deal with this (rare) issue – we put logic in our integration service that will not change the time if the virtual machine is more than 5 seconds ahead of the physical computer.
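Modeled in Python, the rule described above looks roughly like this. This is a sketch of the described behavior, not the actual integration-service code, and the function name is invented:

```python
# Hedged model of the correction rule: the service corrects time when the
# guest is behind (or only slightly ahead), but will not adjust the clock
# backwards if the guest is more than 5 seconds ahead of the host.

AHEAD_TOLERANCE_S = 5

def corrected_guest_time(guest_time_s, host_time_s):
    """Both times as epoch seconds, already adjusted for time zone differences."""
    if guest_time_s <= host_time_s:
        return host_time_s  # guest behind: always corrected forward
    if guest_time_s - host_time_s <= AHEAD_TOLERANCE_S:
        return host_time_s  # slightly ahead: pulled back
    return guest_time_s     # far ahead: left alone to avoid breaking software

print(corrected_guest_time(100, 110))  # 110 (behind -> corrected)
print(corrected_guest_time(113, 110))  # 110 (3 s ahead -> corrected)
print(corrected_guest_time(120, 110))  # 120 (10 s ahead -> untouched)
```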

UPDATE 11/22: I was asked how having the virtual machine in a different time zone to the Hyper-V server would affect this.  The short answer is that it does not.  The 5 second check is done after we have done the necessary time zone translation.

Question #8 – When should I disable the Hyper-V time synchronization service (either in the virtual machine settings, or inside the guest operating system)?


There are definitely times when you will want to augment the functionality of the Hyper-V time integration services with a remote time source (be it a domain source or an external time server) but the only way to get the best experience around virtual machine boot / restore operations is to leave the Hyper-V time integration services enabled.

Question: Hyper-V, CPU Load and System Clock Drift

Using Hyper-V Server, you may find that the time is drifting a lot from the actual time, especially when Guest Virtual Machines are using CPUs heavily. The host OS is also virtualized, which means that the load of the host is also making the clock drift.

How to prevent the clock from drifting

  1. Disable the Time Synchronization in the Integration Services. (Warning, this setting is defined per snapshot)
  2. Import the following registry file : 

    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\W32Time\Config]
    Note: If you are using Notepad, make sure to save the file using a Unicode encoding.
  3. If the guest OS (and the server OS) is not on a domain, type in the following to set the time source : 

    w32tm /config /manualpeerlist:"<host1>,0x01 <host2>,0x01" /syncfromflags:MANUAL /update
    Note: Hosts’ FQDNs are separated by spaces.
  4. Run the following command to force a time synchronization 

    w32tm /resync
  5. Check that the clock is not drifting anymore by using this command : 

    w32tm /monitor

A bit of background …

The system I’m currently working on is heavily based on time. It relies a lot on timestamps taken from various steps of the process. If for some reason the system clock is unstable, that means the data generated by the system is unreliable. It sometimes generates corrupt data, and this is not good for the business.

I was investigating a sequence of events stored in the database in an order that could not have happened, because the code cannot generate it that way.

After loads of investigation looking for code issues, I stumbled upon something rather odd in my application logs, given that each line from the same thread should carry a later timestamp than the previous one:

2009-10-13T17:15:26.541T [INFO][][7] … 
2009-10-13T17:15:26.556T [INFO][][7] … 
2009-10-13T17:15:24.203T [INFO][][7] … 
2009-10-13T17:15:24.219T [INFO][][7] … 
2009-10-13T17:15:24.234T [INFO][][7] …

All the lines above were generated from the same thread, which means that the system time changed radically between the second and the third line. From the application’s point of view, time went backward by about two seconds, which also means that during those two seconds data was generated in the future. This is not very good…
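A quick way to spot this kind of anomaly is to scan the log for lines whose timestamp precedes the previous line's. Here is a minimal sketch in Python (the timestamp format and the sample lines are taken from the excerpt above; the helper name is my own):

```python
from datetime import datetime

def find_backward_jumps(lines):
    """Return (line_index, seconds_gone_back) for every log line whose
    timestamp is earlier than the previous line's."""
    jumps = []
    prev = None
    for i, line in enumerate(lines):
        # The timestamp is the first token, e.g. 2009-10-13T17:15:26.541T
        stamp = line.split()[0].rstrip("T")
        ts = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%f")
        if prev is not None and ts < prev:
            jumps.append((i, (prev - ts).total_seconds()))
        prev = ts
    return jumps

log = [
    "2009-10-13T17:15:26.541T [INFO][][7] ...",
    "2009-10-13T17:15:26.556T [INFO][][7] ...",
    "2009-10-13T17:15:24.203T [INFO][][7] ...",
    "2009-10-13T17:15:24.219T [INFO][][7] ...",
]
print(find_backward_jumps(log))  # the clock went back ~2.35 s at line index 2
```

Running this over the real log file would flag every backward jump, not just the one that happened to be noticed by eye.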

The Investigation

Looking at the log4net source code, I confirmed that the time is obtained via a System.DateTime.Now call, which rules out an issue in the logging code.

Then I looked at the Windows Time Service utility, and ran the following command:

w32tm /stripchart /

I found out that the offset from the NTP source was very large, something like 10 seconds. But the most disturbing part was not the offset itself, but how it evolved.

Depending on the load of the virtual machine, the difference could grow very large, up to a second behind in less than a minute. Both the host and the guest machines exposed this behavior. Since Hyper-V Integration Services by default synchronize the clocks of all virtual machines with the host OS, the load of a single virtual machine can influence the clock of all the other virtual machines. The host machine’s CPU load can also influence the overall clock rate, because the host is itself virtualized.

Trying to explain this behavior

To make an educated guess, the time source used by Windows seems to be the TSC of the processor (via the RDTSC instruction), which is virtualized. The preemption of the CPU by other virtual machines seems to have a negative effect on the counter used as a reference by Windows.

The more the CPU is preempted, the more the counter drifts.

Correcting the drift

By default, the Time Service has a “phase adjustment” process that slows down or speeds up the system clock rate to match a reliable time source. The TSC on the physical CPU is clocked by the system quartz oscillator (if that is still the case). The “normal” drift of that kind of component is generally small, and may be related to external factors like the temperature of the room. The Time Service can deal with that kind of slow drift.

But the default configuration does not seem to be a good fit for a time source that drifts this quickly and rather unpredictably. We need to shorten the phase adjustment process.

Fixing this drift is rather simple: the Time Service needs to correct the clock rate more frequently, to cope with the load of the virtual machines that slows down the clock of the host.

Unfortunately, the default parameters on Hyper-V Server R2 are those of a default domain member, which are defined here. The default polling period from a reliable time source, 3600 seconds, is far too long given the drift faced by the host clock.

A few parameters need to be adjusted in the registry for the clock to stay synchronized :

  • Set the SpecialInterval value to 0x1 to force the use of SpecialPollInterval.
  • Set SpecialPollInterval to 10, to force the source NTP to be polled every 10 seconds.
  • Set MaxAllowedPhaseOffset to 1, to force the clock to be set directly once the drift exceeds 1 second, if adjusting the clock rate has failed.

Using these parameters does not mean that the clock will stay perfectly stable, but at the very least it will correct itself very quickly.
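To see why the polling period matters so much, assume a worst-case drift of about one second per minute, as observed above. The maximum offset that can accumulate between two corrections is roughly drift rate × poll interval, which a few lines of Python make concrete (the one-second-per-minute figure is taken from the observations above, not a guaranteed bound):

```python
def max_accumulated_offset(drift_seconds_per_minute, poll_interval_seconds):
    """Worst-case clock offset built up between two NTP corrections,
    assuming the drift rate stays constant over the interval."""
    drift_per_second = drift_seconds_per_minute / 60.0
    return drift_per_second * poll_interval_seconds

# Default domain-member poll interval of 3600 s:
print(max_accumulated_offset(1.0, 3600))  # a full minute adrift
# SpecialPollInterval shortened to 10 s as suggested above:
print(max_accumulated_offset(1.0, 10))    # well under a quarter second
```

Shortening the poll interval from 3600 s to 10 s thus reduces the worst-case accumulated offset by a factor of 360, from about a minute to about a sixth of a second.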

It seems that there is a hidden boot.ini parameter for Windows 2003, /USEPMTIMER, which forces Windows to use the ACPI timer and avoids that kind of drift. I have not been able to confirm it has any effect at all, nor whether the OS is actually using the PM timer or the TSC.

Windows Server 2012 File Server

In computing, a file server (or fileserver) is a computer attached to a network that provides a location for shared disk access, i.e. shared storage of computer files (such as documents, sound files, photographs, movies, images, databases, etc.) that can be accessed by workstations able to reach it over the network. The term server highlights the role of the machine in the client–server scheme, where the clients are the workstations using the storage. A file server commonly does not perform computational tasks or run programs on behalf of its clients; it is designed primarily to enable the storage and retrieval of data while the computation is carried out by the workstations.

File servers are commonly found in schools and offices, where users use a LAN to connect their client computers.

File servers may also be categorized by the method of access: Internet file servers are frequently accessed by File Transfer Protocol (FTP) or by HTTP (but are different from web servers, which often provide dynamic web content in addition to static files). Servers on a LAN are usually accessed by the SMB/CIFS protocol (Windows and Unix-like) or the NFS protocol (Unix-like systems). 

Design of file servers:

In modern businesses the design of file servers is complicated by competing demands for storage space, access speed, recoverability, ease of administration, security, and budget. 

The primary piece of hardware equipment for servers over the last couple of decades has proven to be the hard disk drive. Although other forms of storage are viable (such as magnetic tape and solid-state drives) disk drives have continued to offer the best fit for cost, performance, and capacity. 

1. Storage:

Since the crucial function of a file server is storage, technology has been developed to operate multiple disk drives together as a team, forming a disk array. A disk array typically has cache (temporary memory storage that is faster than the magnetic disks), as well as advanced functions like RAID and storage virtualization. Disk arrays typically increase the level of availability by using redundant components in addition to RAID, such as power supplies. Disk arrays may be consolidated or virtualized in a SAN. 

2. Network Attached Storage (NAS):

Network-attached storage (NAS) is file-level computer data storage connected to a computer network providing data access to a heterogeneous group of clients. NAS devices are specifically distinguished from file servers generally in that a NAS is a computer appliance – a specialized computer built from the ground up for serving files – rather than a general-purpose computer being used for serving files (possibly with other functions). In discussions of NAS, the term “file server” is generally used as a contrasting term, referring to general-purpose computers only.

As of 2010 NAS devices are gaining popularity, offering a convenient method for sharing files between multiple computers. Potential benefits of network-attached storage, compared to non-dedicated file servers, include faster data access, easier administration, and simple configuration.[3]

NAS systems are networked appliances containing one or more hard drives, often arranged into logical, redundant storage containers or RAID arrays. Network-attached storage removes the responsibility of file serving from other servers on the network. NAS systems typically provide access to files using network file sharing protocols such as NFS, SMB/CIFS (Server Message Block/Common Internet File System), or AFP.

A. RAID (redundant array of independent disks) is a data storage virtualization technology that combines multiple physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word RAID followed by a number, for example RAID 0 or RAID 1. Each schema, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.   

      RAID Standard levels: 

·       RAID 0 consists of striping, without mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the disks in the set, the same as with a spanned volume. There is no added redundancy for handling disk failures, just as with a spanned volume. Thus, failure of one disk causes the loss of the entire RAID 0 volume, with reduced possibilities of data recovery when compared with a broken spanned volume. Striping distributes the contents of files roughly equally among all disks in the set, which makes concurrent read or write operations on the multiple disks almost inevitable and results in performance improvements. The concurrent operations make the throughput of most read and write operations equal to the throughput of one disk multiplied by the number of disks. Increased throughput is the big benefit of RAID 0 versus spanned volume, at the cost of increased vulnerability to drive failures. 

·       RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two drives, thereby producing a “mirrored set” of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning. 

·       RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), as of 2014 it is not used by any commercially available system. 

·       RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice. 

·       RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read/write I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read/write operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers. 

·       RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see “Increasing rebuild time and failure probability” section, below). Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array. In August 2012, Dell posted an advisory against the use of RAID 5 in any configuration on Dell EqualLogic arrays and RAID 50 with “Class 2 7200 RPM drives of 1 TB and higher capacity” for business-critical data. 

·       RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems. 
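The space/redundancy trade-offs described above can be summarized in a few lines of code. This sketch (my own illustration, assuming equal-size disks as RAID generally requires) computes the usable capacity and the number of guaranteed tolerated drive failures for the most common standard levels:

```python
def raid_usable_capacity(level, n_disks, disk_size):
    """Usable capacity and guaranteed tolerated drive failures for
    common standard RAID levels, assuming n equal-size disks."""
    if level == 0:                      # striping, no redundancy
        return n_disks * disk_size, 0
    if level == 1:                      # mirroring: one disk's worth of data
        return disk_size, n_disks - 1
    if level == 5:                      # one disk's worth of distributed parity
        assert n_disks >= 3, "RAID 5 requires at least three disks"
        return (n_disks - 1) * disk_size, 1
    if level == 6:                      # two disks' worth of distributed parity
        assert n_disks >= 4, "RAID 6 requires a minimum of four disks"
        return (n_disks - 2) * disk_size, 2
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5, 6):
    cap, failures = raid_usable_capacity(level, 4, 2000)  # four 2000 GB disks
    print(f"RAID {level}: {cap} GB usable, survives {failures} drive failure(s)")
```

With four 2000 GB disks, RAID 0 yields 8000 GB but no fault tolerance, RAID 1 only 2000 GB but survives three failures, and RAID 5/6 sit in between, which matches the descriptions above.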

B. Nested (hybrid) RAID:

In what was originally termed hybrid RAID, many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep. 

The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the “+” (yielding RAID 10 and RAID 50, respectively). 

·       RAID 0+1 creates two stripes and mirrors them. If a single drive fails, one of the stripes has failed; at that point you are effectively running RAID 0 with no redundancy. Significantly higher risk is introduced during a rebuild than with RAID 1+0, because all the data from all the drives in the remaining stripe has to be read, rather than from just one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window. 

·       RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives. 

·       JBOD RAID N+N: With JBOD (Just a Bunch Of Disks), it is possible to concatenate disks, but also volumes such as RAID sets. With larger drive capacities, write and rebuild time may increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting larger RAID sets into smaller subsets and concatenating them with JBOD, write and rebuild time may be reduced. There is another advantage in the form of disaster recovery: if a small RAID subset fails, the data on the other RAID subsets is not lost, reducing restore time. If a hardware RAID controller is not capable of nesting JBOD with RAID, then JBOD can be achieved with software RAID in combination with RAID set volumes offered by the hardware RAID controller.

What is a Spanned Volume?: 

When talking of spanned volumes we are brought to the topic of non-RAID drive architectures.

          C. Non-RAID drive architectures:

The most widespread standard for configuring multiple hard disk drives is RAID (Redundant Array of Inexpensive/Independent Disks), which comes in a number of standard and non-standard configurations. Non-RAID drive architectures also exist, and are referred to by acronyms with similarity to RAID.

  • JBOD (derived from “just a bunch of disks“): an architecture in which multiple hard disk drives are operated as individual, independent devices. Hard drives may be treated independently or may be combined into one or more logical volumes using a volume manager like LVM or mdadm; such volumes are usually called “spanned” or “linear | SPAN | BIG”. A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume. Redundancy for resilience and/or bandwidth improvement may be provided in software, at a higher level.
  • SPAN or BIG: a method of combining the free space on multiple hard disk drives from a JBOD to create a spanned volume; such a concatenation is sometimes also called BIG/SPAN. Concatenation or spanning of drives is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single logical disk. It provides no data redundancy: drives are merely concatenated together, end to beginning, so they appear to be a single large disk. It may be referred to as SPAN or BIG (meaning just the words “span” or “big”, not acronyms). What makes a SPAN or BIG different from RAID configurations is the flexibility in the selection of drives: while RAID usually requires all drives to be of similar capacity[a], and the same or similar drive models are preferred for performance reasons, a spanned volume has no such requirements, as it often contains mismatched types and sizes of hard disk drives.
  • MAID (derived from “massive array of idle drives“): an architecture using hundreds to thousands of hard disk drives for providing nearline storage of data, primarily designed for “Write Once, Read Occasionally” (WORO) applications, in which increased storage density and decreased cost are traded for increased latency and decreased redundancy.
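Concatenation is simple enough to express directly: logical blocks are laid out end to beginning across drives of possibly mismatched sizes. A minimal sketch (my own illustration) of the address mapping a spanned (SPAN/BIG) volume performs:

```python
def span_locate(block, disk_sizes):
    """Map a logical block number on a spanned volume to (disk_index,
    block_within_disk). Drives are concatenated end to beginning, so
    mismatched sizes are fine -- but losing any one disk loses the volume."""
    for disk, size in enumerate(disk_sizes):
        if block < size:
            return disk, block
        block -= size
    raise IndexError("block beyond end of spanned volume")

disks = [100, 250, 80]           # mismatched drive sizes, in blocks
print(span_locate(0, disks))     # (0, 0)   start of the first disk
print(span_locate(120, disks))   # (1, 20)  20 blocks into the second disk
print(span_locate(360, disks))   # (2, 10)  10 blocks into the third disk
```

Note that there is no parity or mirroring anywhere in the mapping, which is exactly why failure of any single drive loses the whole logical volume.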

From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers. Potential benefits of dedicated network-attached storage, compared to general-purpose servers also serving files, include faster data access, easier administration, and simple configuration. 


A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser.

A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used. For example, FreeNAS or NAS4Free, both open source NAS solutions designed for commodity PC hardware, are implemented as a stripped-down version of FreeBSD.

NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID.

NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB/CIFS (Server Message Block/Common Internet File System) (used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). NAS units rarely limit clients to a single protocol.

         Vs. DAS:

The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension to an existing server and is not necessarily networked. NAS is designed as an easy and self-contained solution for sharing files over the network.

Both DAS and NAS can potentially increase availability of data by using RAID or clustering.

When both are served over the network, NAS could have better performance than DAS, because the NAS device can be tuned precisely for file serving, which is less likely to happen on a server responsible for other processing. Both NAS and DAS can have various amounts of cache memory, which greatly affects performance. When comparing use of NAS with use of local (non-networked) DAS, the performance of NAS depends mainly on the speed of, and congestion on, the network.

NAS is generally not as customizable in terms of hardware (CPU, memory, storage components) or software (extensions, plug-ins, additional protocols) as a general-purpose server supplied with DAS.

         Vs. SAN:

NAS provides both storage and a file system. This is often contrasted with SAN (Storage Area Network), which provides only block-based storage and leaves file system concerns on the “client” side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE), and HyperSCSI.

One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server) whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with client’s local disks), and available to be formatted with a file system and mounted.

Despite their differences, SAN and NAS are not mutually exclusive, and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system. An example of this is Openfiler, a free software product running on Linux-based systems. A shared disk file system can also be run on top of a SAN to provide filesystem service.


NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. The potential emerging market for NAS is the consumer market, where there are large amounts of multimedia data. Such consumer market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has plummeted in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, PowerPC, or MIPS processors running an embedded Linux operating system.

         Clustered NAS:

A clustered NAS is a NAS that uses a distributed file system running simultaneously on multiple servers. The key difference between a clustered and a traditional NAS is the ability to distribute (e.g. stripe) data and metadata across the cluster nodes or storage devices. A clustered NAS, like a traditional one, still provides unified access to the files from any of the cluster nodes, regardless of the actual location of the data. 

3. Security:

File servers generally offer some form of system security to limit access to files to specific users or groups. In large organizations, this task is usually delegated to what are known as directory services, such as OpenLDAP, Novell’s eDirectory, or Microsoft’s Active Directory.

These servers work within a hierarchical computing environment that treats users, computers, applications, and files as distinct but related entities on the network, granting access based on user or group credentials. In many cases, the directory service spans many file servers, potentially hundreds for large organizations. In the past, and in smaller organizations, authentication could take place directly at the server itself.

File and Storage Services:

File and Storage Services includes technologies that help you set up and manage one or more file servers, which are servers that provide central locations on your network where you can store files and share them with users. If your users need access to the same files and applications, or if centralized backup and file management are important to your organization, you should set up one or more servers as a file server by installing the File and Storage Services role and the appropriate role services.

The File and Storage Services role and the Storage Services role service are installed by default, but without any additional role services. This basic functionality enables you to use Server Manager or Windows PowerShell to manage the storage functionality of your servers. However, to set up or manage a file server, you should use the Add Roles and Features Wizard in Server Manager or the Install-WindowsFeature Windows PowerShell cmdlet to install additional File and Storage Services role services, such as the role services discussed in this topic.

Practical applications

Administrators can use the File and Storage Services role to set up and manage multiple file servers and their storage capabilities by using Server Manager or Windows PowerShell. Some of the specific applications include the following:

Storage Spaces – Use to deploy high availability storage that is resilient and scalable by using cost-effective industry-standard disks.

Folder Redirection, Offline Files, and Roaming User Profiles – Use to redirect the path of local folders (such as the Documents folder) or an entire user profile to a network location, while caching the contents locally for increased speed and availability.

Work Folders – Use to enable users to store and access work files on personal PCs and devices, in addition to corporate PCs. Users gain a convenient location to store work files and access them from anywhere. Organizations maintain control over corporate data by storing the files on centrally managed file servers and optionally specifying user device policies (such as encryption and lock screen passwords). Work Folders is a new role service in Windows Server 2012 R2.

Data Deduplication – Use to reduce the disk space requirements of your files, saving money on storage.

iSCSI Target Server – Use to create centralized, software-based, and hardware-independent iSCSI disk subsystems in storage area networks (SANs).
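The idea behind Data Deduplication – storing a single copy of identical data on the volume – can be sketched as a content-addressed chunk store. The following is a toy illustration of the concept only, not Windows Server's actual implementation (which uses variable-size chunking and other optimizations):

```python
import hashlib

class DedupStore:
    """Toy chunk store: identical chunks are kept once, keyed by hash."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # chunk hash -> chunk bytes (stored once)
        self.files = {}           # file name -> list of chunk hashes

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # duplicate chunks are free
            refs.append(digest)
        self.files[name] = refs

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore(chunk_size=4)
store.put("a.txt", b"ABCDABCDABCD")   # three identical 4-byte chunks
store.put("b.txt", b"ABCDEFGH")       # first chunk shared with a.txt
print(len(store.chunks))              # 2 unique chunks stored, not 5
print(store.get("a.txt") == b"ABCDABCDABCD")  # data still round-trips intact
```

Six logical chunks collapse to two physical ones here, which is the disk-space saving the role service provides, at the cost of bookkeeping and hashing overhead.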


Work Folders –  Provides a consistent way for users to access their work files from their personal computers and devices. See Work Folders for more information. 

Server Message Block –  Enhancements include automatic rebalancing of Scale-Out File Server clients, improved performance of SMB Direct, and improved SMB event messages. See What’s New in SMB for more information. 

Storage Spaces –  Enhancements include SSD and HDD storage tiers, an SSD-based write-back cache, parity space support for failover clusters, dual parity support, and greatly decreased storage space rebuild times. See What’s New in Storage Spaces for more information. 

DFS Replication –  Enhancements include database cloning for large performance gains during initial sync, a Windows PowerShell module for DFS Replication, a new DFS Replication WMI provider, faster replication on high bandwidth connections, conflict and preexisting data recovery, and support for rebuilding corrupt databases without unexpected data loss. See What’s New in DFS Replication and DFS Namespaces for more information. 

iSCSI Target Server –  Provides block storage to other servers and applications on the network by using the Internet SCSI (iSCSI) standard. Updates include virtual disk enhancements, manageability enhancements in a hosted or private cloud, and improved optimization to allow disk-level caching. See What’s New in iSCSI Target Server for more information. 

Data Deduplication –  Saves disk space by storing a single copy of identical data on the volume. 

Storage Spaces and storage pools –  Enables you to virtualize storage by grouping industry-standard disks into storage pools and then creating storage spaces from the available capacity in the storage pools. 

Unified remote management of File and Storage Services in Server Manager –  Enables you to remotely manage multiple file servers, including their role services and storage, from a single window. 



Expanding the VHD or VHDX of VMs

Something to note on expanding the VHD or VHDX of VMs

Question: Recover From Expanding VHD or VHDX Files On VMs With Checkpoints

Posted on October 1, 2015

So you’ve expanded the virtual disk (VHD/VHDX) of a virtual machine that has checkpoints (or snapshots, as they used to be called) on it. Did you forget about them? Did you really leave them lingering around for that long? That’s bad practice and not supported (we don’t have production checkpoints yet; that’s for Windows Server 2016). Anyway, your virtual machine won’t boot. Depending on the importance of that VM, you might be chewed out big time or ridiculed. But what if you don’t have a restore that works? Suddenly it might have become a resume-generating event.

All does not have to be lost. There might be hope if you didn’t panic and make even more bad decisions. Please, if you’re unsure what to do, call an expert, a real one, or at least someone who knows real experts. It also helps if you have spare disk space (the fast sort, if possible) and a Hyper-V node where you can work without risk. We’ll walk you through the scenarios for both a VHDX and a VHD.

How did you get into this pickle?

If you go to the Edit Virtual Hard Disk Wizard via the VM settings, it won’t allow expanding if the VM has checkpoints, whether the VM is online or not.


VHDs cannot be expanded online. If the VM had checkpoints, it must have been shut down when you expanded the VHD. If you went directly to the Edit Disk tool in Hyper-V Manager to open up the disk, you don’t get a warning: it’s treated as a virtual disk that’s not in use. Same deal if you do it in PowerShell:

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -SizeBytes 15GB

That just works.

VHDXs can be expanded online if they’re attached to a vSCSI controller. But if the VM has checkpoints, expanding is not allowed.


So yes, you deliberately shut it down to be able to do it with the Edit Disk tool in Hyper-V Manager. I know, the warning message was not specific enough, but consider this: the Edit Disk tool, when launched directly, has no idea what the disk you’re opening is used for, only whether it’s online / locked.

Anyway, the result is the same for the VM whether it was a VHD or a VHDX: an error when you start it up.

[Window Title]

Hyper-V Manager

[Main Instruction]

An error occurred while attempting to start the selected virtual machine(s).


‘DidierTest06’ failed to start.

Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk.’.

Virtual disk ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd’ failed to open because a problem occurred when attempting to open a virtual disk in the differencing chain, ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd’: ‘The size of the virtual hard disk is not valid.’.

You might want to delete the checkpoint, but the merge will only succeed for the virtual disks that have not been expanded. You actually don’t need to do this now; it’s better if you don’t, as it saves you some stress and extra work. You could remove the expanded virtual disks from the VM. It will boot, but in many cases the missing data on those disks is very bad news. But at least you’ve proven the root cause of your problems.

If you inspect the AVHD/AVHDX file you’ll get an error that states:

The differencing virtual disk chain is broken. Please reconnect the child to the correct parent virtual hard disk.


However attempting to do so will fail in this case.

Failed to set new parent for the virtual disk.

The Hyper-V Virtual Machine Management service encountered an unexpected error: The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk. (0xC03A0017).


Is there a fix?

Let’s say you don’t have a backup (shame on you). So now what? Make copies of the VHDX/AVHDX or VHD/AVHD files and safeguard those. You can work on the copies or on the originals. I’ll use the originals here, as this blog post is already way too long. Note that some extra disk space and speed come in very handy now. You might even copy the files off to a lab server. That takes more time, but at least you’re not working on a production host.

Working on the original virtual disk files (VHD/AVHD and / or VHDX/AVHDX)

If you know the original size of the VHDX before you expanded it, you can shrink it to exactly that. If you don’t, PowerShell comes to the rescue for finding out the minimum size.
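A quick way to find that minimum is Get-VHD, which reports a MinimumSize property (a sketch; the path is the example disk from this post, and it must run on a host with the Hyper-V module loaded):

```powershell
# Get-VHD reports MinimumSize: the smallest size this virtual disk can shrink to.
Get-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" |
    Select-Object Path, Size, MinimumSize
```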


But even better, you can shrink it to its minimum size; there’s a parameter for that!

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -ToMinimumSize

You’re not home yet, though. If you restart the VM right now it will fail with the following error:

‘DidierTest06’ failed to start. (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)

‘DidierTest06’ Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the identifiers of the parent virtual hard disk and differencing disk.’ (0xC03A000E). (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)


What you need to do is reconnect the AVHDX to its parent and choose to ignore the ID mismatch. You can do this via Edit Disk in Hyper-V Manager or in PowerShell. For more information on manually merging & repairing checkpoints, see my blogs on this subject here. In this post I’ll just show the screenshots as a walkthrough.
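In PowerShell, the reconnect can be sketched with Set-VHD and its -IgnoreIdMismatch switch (the paths are the example disks from this post; only use this when you are certain the parent really is the right one):

```powershell
# Reconnect the differencing disk to its parent, accepting the identifier
# mismatch that the resize introduced.
Set-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd" `
        -ParentPath "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" `
        -IgnoreIdMismatch
```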


Once that’s done, your VHDX is good to go.

A VHD can’t be shrunk with the inbox tools. There is, however, a free command-line tool that can do it, named VHDTool.exe. The original is hard to find on the web, so here is the installer if you need it. You only need the executable, which is actually portable; don’t install this on a production server. It has a repair switch to deal with just this occurrence!

Here’s an example of my lab …

D:\SysAdmin>VhdTool.exe /repair "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd"


That’s it for the VHD …


You’re back in business! All that’s left to do is get rid of the checkpoints, so you delete them. If you wanted to apply them and get rid of the delta, you could have just removed the disks, re-added the VHD/VHDX, and been done with it. But in most of these scenarios you want to keep the delta, as you most probably didn’t even realize you still had checkpoints around. Zero data loss.


Save yourself the stress, hassle, and possibly the expense of hiring an expert. How? Please do not expand a VHD or VHDX of a virtual machine that has checkpoints. It will cause boot issues with the expanded virtual disk or disks! You will be in a stressful, painful pickle that you might not get out of if you make the wrong decisions and choices!

As a closing note, you must have backups and restores that you have tested. Do not rely on your smarts and creativity, or those of others, let alone luck. Luck runs out. Options run out. Even for the best and luckiest of us. Veeam has saved my proverbial behind a few times already.

Question: Hyper-V VHD Expand and keeping snapshots?



I have a virtual machine and it includes the OS and other programs, of course. ATM I have around 5 snapshots.

The disk space is running low and I wanted to expand the VHD. I entered the VM’s settings and intended to edit and expand the VHD, but all I found was this message.


So are there other ways of expanding a VHD? I simply wondered if there is a way to keep the snapshots and expand the virtual hard disk. I realize that there might be problems if I remove them and then import them again.


Remove your snapshots and then expand the disk. You should read up on how snapshots work, because that will explain why expanding the underlying VHD will be bad news for the delta disks.



It may not have been possible when the original answer was written, but you can merge your snapshots into a new VHDX.

  1. Look up the path to the disk image for the snapshot you want to expand in the snapshot settings.
  2. Using the Hyper-V Manager -> Action -> Edit Disk, select the snapshot .avhdx file.
  3. Through the wizard, choose to merge the snapshot disk to a new dynamic .vhdx image. You should be able to configure a new size at this point.
  4. When it’s done, you can change your snapshot settings to point to the new .vhdx image.
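For reference, steps 2 and 3 can be sketched in PowerShell with Convert-VHD, which merges the differencing chain into a new disk (the paths and sizes here are hypothetical examples, not from the original answer):

```powershell
# Merge the snapshot chain into a new dynamic VHDX, then expand the result.
Convert-VHD -Path "D:\VMs\MyVM\Snapshot.avhdx" -DestinationPath "D:\VMs\MyVM\Merged.vhdx" -VHDType Dynamic
Resize-VHD -Path "D:\VMs\MyVM\Merged.vhdx" -SizeBytes 100GB
```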

VMWare/Hyper-V Snapshot Issue

Can not edit vhd, because the system thinks snapshots exist, but I disagree

Never resize a VHD with snapshots or differencing disks – more about differencing disks

Q & A: Expanding VHDX files

Expanding a VHDX Disk on Hyper-V

REVISE: Hyper-V Remote Management Using Powershell

Question: remote management powershell

PowerShell Remoting is Secure

 PowerShell Enables Any Version of Windows to Remotely Manage Any Other Version of Windows (and Hyper-V)

A very common complaint, and sometimes an outright problem, is that Hyper-V Manager can only fully control versions of Hyper-V that are running on the same code base. Hyper-V Manager in Windows 7 can’t control anything after the version of Hyper-V that released with the Windows 7/Windows Server 2008 R2 code base; Windows 8 or later was required. Starting in Windows 8/Server 2012, Hyper-V Manager can usually manage down-level hosts, but some people have trouble even with that.

With PowerShell, there’s no problem. PowerShell Remoting was introduced in PowerShell 2.0, and since then, PowerShell Remoting has worked perfectly well both up-level and down-level. The following is a screenshot of a Windows 7 installation with native PowerShell 2.0 remotely controlling a Hyper-V 2012 R2 server with native PowerShell 4.0:

PSRemoting Different Versions


How to Enable PowerShell Remoting for Hyper-V

Both the local and remote systems must be set up properly for PowerShell Remoting to work. The first thing that you must do on both sides is:

Enable-PSRemoting -Force

On non-domain-joined systems, I received an “Access Denied” error unless I used the real Administrator account; just using an account in the local administrators group wasn’t enough. This seems to be at odds with the normal Windows authentication model and was just plain annoying for Windows 7, which disables the Administrator account by default. You can try the -SkipNetworkProfileCheck parameter… it might help.

Installing the PowerShell Hyper-V Module

Not surprisingly, the easiest way to install the Hyper-V PowerShell module is with PowerShell. This works on Windows 8 and later, Windows Server 2012 and later, and Hyper-V Server 2012 and later:

Install-WindowsFeature Hyper-V-PowerShell

If you’d like to install Hyper-V Manager along with the PowerShell module:

Install-WindowsFeature RSAT-Hyper-V-Tools

If you’d like to install both of these tools along with Hyper-V:

Install-WindowsFeature Hyper-V -IncludeManagementTools

If you’d rather take the long way through the GUI for some reason, your approach depends on whether you’re using a desktop or a server operating system.

For a desktop, open Turn Windows features on or off via the Control Panel. Open the Hyper-V tree, then the Hyper-V Management Tools subtree, and check Hyper-V Module for Windows PowerShell (along with anything else that you’d like).

Hyper-V PowerShell Module on Windows


For a Server operating system, start in Server Manager. Click Add roles and features. Click through all of the screens in the wizard until you reach the Features page. Expand Remote Server Administration Tools, then Hyper-V Management Tools, and check Hyper-V Module for Windows PowerShell (along with anything else that you’d like).

Server Hyper-V PowerShell Module


Once the module is installed, you can use it immediately without rebooting. You might need to import it if you’ve already got an open PowerShell session, or you could just start a new session.
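If you do want to load it into a session that’s already open, the import is a one-liner (a quick sketch):

```powershell
# Load the module explicitly and confirm that it is available.
Import-Module Hyper-V
Get-Module Hyper-V | Select-Object Name, Version
```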

Implicit PowerShell Remoting in the Hyper-V Module

The easiest way to start using PowerShell Remoting is with implicit remoting. As a general rule, cmdlets with a ComputerName parameter make use of implicit remoting. What that means is that all of the typing is done on your local machine, but all of the action occurs on the remote machine. Everything that the Hyper-V module does is in WMI, so all of the commands that you type are sent to the VMMS service on the target host to perform. If you are carrying this out interactively, the results are then returned to your console in serialized form.

To make use of implicit remoting with the Hyper-V module, you must have it installed on your local computer. There are more limitations on implicit remoting:

  • You can only (legally) install the Hyper-V PowerShell module that matches your local Windows version
  • You are limited to the functionality exposed by your local Hyper-V PowerShell module
  • No matter the local version, it cannot be used to manage 2008 R2 or lower target hosts
  • Implicit remoting doesn’t always work if the local and remote Hyper-V versions are different

For example, I cannot install the Hyper-V PowerShell module at all on Windows 7. As another example, the Hyper-V PowerShell module in Windows 10 cannot control a 2012 R2 environment. Basically, the Hyper-V PowerShell module on your local system follows similar rules as Hyper-V Manager on your local system. I want to reiterate that these rules only apply to implicit remoting; you can still explicitly operate any cmdlet in any module that exists on the target.

For example, from my Windows 10 desktop, I run Get-VM -ComputerName svhv01 against my Windows Server 2016 TP5 host:

Implicit Remote Get-VM


Behind the scenes, it is directing WMI on SVHV01 to run “SELECT * FROM Msvm_ComputerSystem” against its “root\virtualization\v2” namespace. It is then processing the results through a local view filter.

Implicit Remoting Against Several Hosts

I was saying how PowerShell Remoting can be used against multiple machines at once. Any time that a cmdlet’s ComputerName parameter supports a string array, you can operate it against multiple machines. Run Get-Help against the cmdlet in question and check to see if the ComputerName parameter has square brackets:

ComputerName with Array Support


As you can see, Get-VM has square brackets in its ComputerName parameter (the [<String[]>] part), so it can accept multiple hosts. Example:

Get-VM -ComputerName svhv01, svhv02

Locking Implicit Remoting to a Specific Host

I have a complex environment with several hosts, so I am content to always specify the -ComputerName parameter as necessary. If you’ve only got a single host, then you might like to avoid typing that ComputerName parameter each time. To do that, open up your PowerShell profile and enter the following:

Get-Command -Module Hyper-V -Verb Get | foreach { $PSDefaultParameterValues.Add("$($_.Name):ComputerName","TARGETHOSTNAME") }

Just replace TARGETHOSTNAME with the name or IP address of your remote host. From that point onward, any time you open any PowerShell prompt using the modified profile, all cmdlets from the Hyper-V module that start with “Get” will be automatically injected with “-ComputerName TARGETHOSTNAME”.

Implicit Remoting and the Pipeline

It can take some time and practice to become accustomed to how the pipeline works with implicit remoting, not least of which because some cmdlets behave differently. As a general rule, the pipeline brings items back to your computer.

So, let’s say you did this:

Get-VM -ComputerName svhv02 | Export-CSV -Path d:\temp\svhv02VMList.csv -NoTypeInformation

Where do you expect to find the file? On the remote host or the local host?

Implicit Remoting with a Pipeline


The file was created on my local system (if you looked closely at my screenshot and it made you curious, the file is zero length because there are no VMs on that system, not because it failed).

So, what this behavior means to you is that if you choose to then carry out operations on the objects such as Set cmdlets, you might need to use implicit remoting again after a pipeline… but then again, you might not. Which of the following do you think is correct?

  1. Get-VM -ComputerName svhv01 | Set-VMMemory -MaximumBytes 2.5GB
  2. Get-VM -ComputerName svhv01 | Set-VMMemory -ComputerName svhv01 -MaximumBytes 2.5GB

If you said #1, then you’re right! But, do you know why? It’s actually fairly simple to tell. Just look at the object that is crossing the pipeline:

Computer Name on Object


The object contains its computer name and the cmdlets in the Hyper-V module are smart enough to process it as an object that contains a computer name. To then specify the computer name again for the cmdlet to the right of the pipeline just confuses PowerShell and causes it to error. Not all objects have a ComputerName property and not all cmdlets know to look for it. Furthermore, if you do anything to strip away that property and then try to pipe it to another cmdlet, you will need to specify ComputerName again. For example:

Get-VM -ComputerName svhv01 | select Name | % { Set-VMMemory -ComputerName svhv01 -MaximumBytes 2.5GB -VMName $_.Name }

Explicit Remoting

If the remote host has the PowerShell module installed, you can establish a session to it and begin work immediately. Natively, this requires the target system to be 2012 or later, as there was no official Hyper-V PowerShell module in prior versions. There is an unofficial release for 2008 R2 (and maybe 2008, I never tried). Explicit remoting is “harder” than implicit remoting (requires more typing) but is much more powerful and there are no fancy rules governing it. If you can connect to the remote machine using PowerShell Remoting, then you can operate any PowerShell cmdlets there (for the PowerShell experts, just imagine that there is an asterisk right here that links to a mention of Constrained Endpoints).

There are two general ways to use explicit remoting. The first is to use Enter-PSSession. That drops you right onto the remote console as a sort of console-within-a-console. From there, you can work interactively. The second method is to encapsulate script in a block and feed it to Invoke-Command. That method is used for scripting and automatically cleans up after itself.

PowerShell Remoting in an Interactive Session

The simplest way to remotely connect to an interactive PowerShell session is:

Enter-PSSession hostname

As shown, the command only works between machines on the same domain and when the current user account has sufficient privileges. I showed this above, but for the sake of completeness, to connect when one of the computers is unjoined or untrusted and/or if the user account is not administrative:

Enter-PSSession hostname -Credential (Get-Credential)

This will securely prompt you for the credentials to use on the target. If the target is domain-joined, make sure you use the format domain\username. If it’s a standalone system, you can use the username by itself or computername\username.

If the remote system is using SSL:

Enter-PSSession hostname -Credential (Get-Credential) -UseSSL

Once the connection is established, it will change the prompt to reflect that you’re accessing the remote computer. You can see examples in the screenshots above. It will look like this:

[hostname]: PS C:\Users\UserAccount\Documents>

One thing I generally avoid in instructional text is using positional parameters. I especially dislike mixing positional and named parameters. I’ve done both here for the sake of showing you how uncomplicated PowerShell Remoting is to use. For the purposes of delivering a proper education, be aware that the named parameter that you use to specify the host name is -ComputerName. As long as it’s the first parameter submitted to the cmdlet, you don’t have to type it out.

Once you’re connected, it’s mostly like you were sitting at a PowerShell prompt on the remote system. Be aware that any custom PowerShell profile you have on that host isn’t loaded. Also note that whatever you’re doing on that remote system stays on that system. For instance, you can’t put something into a variable and then call that variable from your system after you exit the session.

When you’re done, you can just close the PowerShell window. PowerShell will clean up for you. If you want to go back to working on your computer:

Exit-PSSession

I much prefer the shorter alias:

exsn
Using PowerShell Remoting to Address the Remote Device Manager Problem

Now that you know how to connect to a remote PowerShell session, you have the ability to overcome one of the long-standing remote management challenges of both Windows and Hyper-V Server. Prior to the 2012 versions, you could remotely connect to Device Manager, but only in a read-only mode. Starting in 2012, even that is gone.

You can get driver information for a lot of devices using PowerShell. For example, Get-NetAdapter returns several Driver fields. But what about installing or upgrading drivers? That was never possible using Device Manager remotely, or even through other remote tools. Well, with PowerShell Remoting, the problem is solvable.

You’re not restricted to running PowerShell commands inside your remote session. You can run executables, batch files, and other such things. The only things you can’t do is initiate a GUI or start anything that has its own shell. Fortunately, one of the things that can be run is pnputil. This utility can be used to manage drivers on a system. So, with PowerShell Remoting, you can remotely install and upgrade drivers.

My systems use Broadcom NICs as their management adapters. I downloaded the drivers and transferred them into local drives on my hosts. Then, using PowerShell Remoting from my desktop, I connected in and used pnputil to install them. The command to install a driver is:

pnputil -i -a driverfile.inf

You can see the results for yourself:

Remote PNPUTIL Driver Installation

You can see that, as expected, my network connection was interrupted. What’s not shown is that PowerShell used a progress bar to show its automatic reconnection attempts. Once the driver was installed, the session automatically picked up right where it left off.

For verification, you can use pnputil -e :

Remote PNPUTIL Driver Enumeration

Windows replaces the original driver file name with OEM#, as you can see here, but it keeps the manufacturer name and the driver version and date. If you want further verification, you can also run Get-NetAdapter | fl Driver* .

Advanced PowerShell Remoting with Invoke-Command

Here’s where the fun begins. Where the remote session usage shown above is great for addressing immediate needs, the true power of PowerShell Remoting is in connecting to multiple machines. Invoke-Command is the tool of choice:

Invoke-Command -ComputerName svhv1, svhv2 -ScriptBlock { Get-VM }

If you run the above on systems prior to 2016, the first thing you’ll likely notice is that there’s no prettification of the output. Get-VM usually looks like this:

Demo Get-VM

That’s because there’s a defined custom formatting view being applied. When an object crosses back to a source system across Invoke-Command, no view is applied. What you get is mostly the same thing you’d see if you piped it through Format-List -Property * (mostly seen as | fl *).

At this point, it might not make any sense why we’re doing this. This is the same output that we got from the implicit remoting earlier, but it required more typing, and more to remember.

If you have any VMs, the above cmdlets produce a wall of text. Let’s slim it down a bit. From PowerShell 2.0 in Windows 7, I ran Invoke-Command -Computer svhv1, svhv2 -Credential (Get-Credential) -UseSSL -ScriptBlock { Get-VM | select Name }. Here’s the output:

Get-VM through Invoke-Command

Running the same script block locally would have resulted in a table with just the name column. Here, I get three more: PSComputerName, RunspaceId, and PSShowComputerName. In PowerShell 3.0 and later, the PSShowComputerName column isn’t there anymore. The benefit here is that you can use these fields to sort the output by the system that sent it.

You can use -HideComputerName with Invoke-Command to suppress the output of all the extra fields if your source system is running PowerShell 3.0 or later. For PowerShell 2.0, RunspaceId is still shown but the others are hidden. They’re still there, so you can query against them. What’s nice about this is that, if your system has the related module installed, any custom formatting views will be applied just as if you were running inside a connected session:

Invoked Get-VM with -HideComputerName

This formatting issue is no longer a concern in Windows 10/Windows Server 2016 (which I suspect is more due to changes in PowerShell 5 than in the Hyper-V module):

PowerShell Remoting Formatting in 2016


Being able to format the output will always be a useful skill even with the basic formatting issues automatically addressed.

It might not make sense why we’re doing things this way. The implicit remoting method that I showed you earlier did just as well, and it required less to type (and memorize). The first reason that you’d use this method is because it doesn’t matter if the local computer and the remote computer are running the same versions of anything. The PowerShell 2.0 examples on Windows 7 that I showed you were running against a Hyper-V Server 2012 R2 environment. Neither the Windows 7 nor my Windows 10 environment can even run implicit remoting against those systems.

Even more importantly, this doesn’t begin to show the true power of PowerShell Remoting. Let’s do some interesting things. For instance, looking at VM output is fun and all, but why stop there? How about:

$RemoteVMs = Invoke-Command -Computer svhv1, svhv2 -Credential (Get-Credential) -UseSSL -ScriptBlock { Get-VM }

What I’ve done here is connect to both hosts, retrieve their virtual machines, store them in a variable on my computer, and then disconnect the remote sessions. I can store them, format the output, build reports, etc. What I can’t do is make any changes to them, but that’s OK. I’ve got a couple of answers to that.

First, I can perform the modifications right on the target system by using a more complicated script block. The following example builds a script that retrieves all the VMs that aren’t set to start automatically and sets them so that they do. That entire script is assigned to a variable named “RemoteVMManipulation”. I use that as the  -ScriptBlock parameter in an Invoke-Command , which I send to each of the hosts. The result of the script is saved to a variable:

$RemoteVMManipulationBlock = {
    $VMsNotStartingAutomatically = Get-VM | where { $_.AutomaticStartAction -ne "Start" }
    foreach ($VM in $VMsNotStartingAutomatically)
    {
        Set-VM -VM $VM -AutomaticStartAction Start
    }
    $VMsNotStartingAutomatically
}
$ModifiedVMs = Invoke-Command -ComputerName svhv1, svhv2 -ScriptBlock $RemoteVMManipulationBlock

This isn’t the most efficient script block, but I wrote it that way for illustration purposes. The variable “VMsNotStartingAutomatically” is created on each of the remote systems, but is destroyed as soon as the script block exits. It is not retrievable or usable on my calling system. However, I’ve placed the combined output into a variable named “ModifiedVMs”. Like a local function call, the output is populated by whatever was in the pipeline at the end of the script block’s execution. In this case, it’s the “VMsNotStartingAutomatically” array. Upon return, this array is transferred to the “ModifiedVMs” variable, which lives only on my system. In subsequent lines of the above script, I can view the VM objects that were changed even though the remote sessions are closed.

The second way to manipulate the objects that were returned is to transmit them back to the remote hosts using the -InputObject  parameter and keep track of them with the added “PSComputerName” field:

$RemoteVMManipulationBlock = {
    foreach ($VMList in $input)
    {
        foreach($VM in $VMList)
        {
            if($VM.PSComputerName -eq $env:COMPUTERNAME)
            {
                Set-VM -VMName $VM.Name -AutomaticStartAction StartIfRunning
            }
        }
    }
}
Invoke-Command -ComputerName svhv1, svhv2 -ScriptBlock $RemoteVMManipulationBlock -InputObject $ModifiedVMs

What I’ve done here is send the “ModifiedVMs” variable from my system into each of the target systems using the -InputObject parameter. Once inside the script block, you reference this variable with $input. There are a few things to note here. For one, you’ll notice that I had to unpack the “ModifiedVMs” variable two times. For another, I wasn’t able to reference the input items as VM objects. Instead, I had to point it to the names of the VMs. This is because we’re not sending in true VM objects. We’re sending in what we got. GetType() reveals them as:


Deserialized.Microsoft.HyperV.PowerShell.VirtualMachine

Because they’re a different object type, parameters expecting an object of the type “Microsoft.HyperV.PowerShell.VirtualMachine” will not work. Objects returned from Invoke-Command are always deserialized, which is why you have to go through these extra steps to do anything other than look at them. If you’ve got decent programming experience or you just don’t care about these sorts of things, you can skip ahead to the next section.

Serialization and deserialization are the methods that the .Net Framework, which is the underpinning of PowerShell, uses to first package objects for uses other than in-memory operations, and then to unpackage them later. There are lots of definitions out there for the term “object” in computer programming, but they are basically just containers. These containers hold only two things: memory addresses and an index of those memory addresses. The contents at those memory addresses are really just plain old binary 0s and 1s. It’s the indexes that give them meaning. How exactly that’s done from the developer’s view is dependent upon the language. So, in C++, you might find a “variable” defined as “int32”. This means that the memory location referenced by the index is 32 bits in length and the contents of those 32 bits should be considered an integer and that those contents can be modified. Indexes come in two broad types: data and code. In (a perhaps overly simplistic description of) .Net, data indexes are properties and refer to constants and variables. Code indexes can be either methods or events, and refer to functions.

As long as the objects are in memory, all of this works pretty much as you’d expect. If you send a read or write operation to a data index, then the contents of memory that it points to are retrieved or changed, respectively. If you (or, for an event, the system) call on a code index, then the memory contents it refers to are processed as an instruction set.

What happens if you want to save the object, say to disk? Well, you probably don’t care about the memory locations. You just want their contents. As for the functions and events, those have no meaning once the object is stored. So, what has to happen is all the code portions need to be discarded and the names of the indexes need to be paired up with the contents of the memory that they point to. As mentioned earlier, the .Net Framework does this by a process called serialization. Once an object is serialized, it can be written directly to disk. In our case, though, the object is being transmitted back to the system that called Invoke-Command .  Once there, it is deserialized so that its new owning system can manipulate it in-memory like any other object. However, because it came from a serialized object, its structure looks different than the original because it isn’t the same object.

You’ll notice that all the events are gone. The only methods are GetType() and ToString(), which are part of this new object, not carried over from the original, and are here because they exist on every PowerShell object. Properties that contained complex objects have also been similarly serialized and deserialized.
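You can reproduce the same round trip by hand with the CLIXML cmdlets, which use the serializer that Remoting relies on; note how the rehydrated copy’s type name gains a “Deserialized.” prefix (a sketch using a process object so it runs anywhere):

```powershell
# Serialize a live object to disk, then rehydrate it as a property-only copy.
$original = Get-Process -Id $PID
$original | Export-Clixml -Path "$env:TEMP\proc.xml"
$copy = Import-Clixml -Path "$env:TEMP\proc.xml"
$original.pstypenames[0]   # System.Diagnostics.Process
$copy.pstypenames[0]       # Deserialized.System.Diagnostics.Process
```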

Using Saved Credentials and Multiple Sessions

Of course, what puts the power into PowerShell is automation. Automation should mean you can “set it and forget it”. It’s tough to do that if you have to manually enter information into Get-Credential, isn’t it?

There’s also the problem of multiple credential sets. Hopefully, if you’ve got more than one host that sits outside your domain, they’ve each got their own credentials. I know that some people out there put Hyper-V systems in workgroup mode “to protect the domain” but then use a credential set with the same user name and password as their domain credentials. It’s no secret that I see no value in workgroup Hyper-V hosts when a domain is available except for perimeter networks, but if you’re going to do it, at least have the sense to use unique user names and passwords. Sure, it can be inconvenient, but when you actively choose the micro-management hell of workgroup-joined machines, you can’t really be surprised when you find yourself in micro-management hell. Fortunately for you, PowerShell Remoting can take a lot of the sting out of it.

The first step is to gather the necessary credentials for all of your remote machines and save them into disk files on the system where you’ll be running Invoke-Command. For that bit, I’m just going to pass the buck to Lee Holmes. He does a great job explaining both the mechanism and the safety of the process.

Once you have the credentials stored in variables, you next create individual sessions to the various hosts.

$SVHV1Session = New-PSSession -ComputerName svhv1 -Credential $SVHV1Credential
$SVHV2Session = New-PSSession -ComputerName svhv2 -Credential $SVHV2Credential

You’re all set. You use the sessions like this:

$RemoteVMs = Invoke-Command -Session $SVHV1Session, $SVHV2Session -ScriptBlock { Get-VM }

Of course, if you’ve only got one remote host but you want to use credentials retrieved from disk, you don’t need the sessions:

$RemoteVMs = Invoke-Command -ComputerName svhv1 -Credential $SVHV1Credential -ScriptBlock { Get-VM }

One thing to remember, though, is that sessions created with New-PSSession will persist, even if you close your PowerShell prompt. They’ll eventually time out, but until then, they’re like a disconnected RDP session. They just sit there and chew up resources, giving an attacker an open session to attempt to compromise, all for no good reason. If you want, you can reconnect and reuse these sessions. Otherwise, get rid of them:

$SVHV1Session | Remove-PSSession
$SVHV2Session | Remove-PSSession

Or, to close all:

Get-PSSession | Remove-PSSession

For More Information

I’ve really only scratched the surface of PowerShell Remoting here. I had heard about it some time before, but I wasn’t in a hurry to use it because I was “getting by” with Hyper-V Manager and Remote Desktop connections. Ever since I spent a few minutes learning about Remoting, I have come to use it every single day. The ad hoc capabilities of Enter-PSSession allow me to knock things out quickly and the scripting powers of Invoke-Command are completely irreplaceable.

All that, and I haven’t even talked about delegated administration (allowing a junior admin to carry out a set of activities as narrowly defined as you like via a pre-defined PowerShell Remoting session) or implicit remoting or setting up “second hop” powers so you can control other computers from within your remote session. For those things, and more, you’re going to have to do some research on your own. I recommend starting with PowerShell in Depth. Most, though not all, of the general information in this article can be found in that book, and its chapter on Remoting covers all the things I teased about, and more.
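To give just a taste of implicit remoting, here is a minimal sketch (it reuses the svhv1 host and credential variable from earlier; the Remote prefix is my own choice, not a requirement):

```powershell
# Implicit remoting: import the Hyper-V module's cmdlets from the remote host.
# The prefixed copies (Get-RemoteVM, etc.) run transparently on svhv1.
$s = New-PSSession -ComputerName svhv1 -Credential $SVHV1Credential
Import-PSSession -Session $s -Module Hyper-V -Prefix Remote
Get-RemoteVM    # actually executes Get-VM on svhv1
```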

Question: VMs with PowerShell SCVMM or Hyper-V Manager

Hyper-V Manager, SCVMM and PowerShell can all be used to create a Hyper-V VM, but if you want to configure certain parameters beforehand, you need to use SCVMM.

There are several ways to create VMs on Hyper-V virtualization hosts. The standard approach is to use Hyper-V Manager or System Center Virtual Machine Manager. However, many administrators like to use PowerShell cmdlets to quickly provision Hyper-V VMs. PowerShell is especially useful when you need to deploy Hyper-V VMs in a development environment or when you need to perform VM creation tasks repeatedly.

Create Hyper-V VMs using Hyper-V Manager

Most Hyper-V administrators are familiar with the VM creation process using Hyper-V Manager. All you need to do is open Hyper-V Manager, right-click on a Hyper-V host in the list of available hosts, click on the New action, click on the Virtual Machine action and then follow the steps on the screen to create the VM. You’ll need to specify parameters, like VM name, VM generation and the path to store VM files.

Create Hyper-V VMs using SCVMM

Deploying VMs using System Center Virtual Machine Manager (SCVMM) is fairly simple. You can deploy VMs on a standalone Hyper-V host or in a Hyper-V cluster. You need to go to the VMs and Services workspace, right-click on a SCVMM host group and then click on the Create Virtual Machine action.

When you click on the Create Virtual Machine action, SCVMM will open a wizard. All you need to do is follow the steps on the screen. One of the main advantages of SCVMM is that it allows you to configure VM parameters — including Dynamic Memory — before the actual creation process starts. Another benefit of using SCVMM is that you can quickly provision a VM by selecting a SCVMM template that already includes the required VM settings. SCVMM also provides greater flexibility when deploying VMs in a production environment.

When you provision VMs using SCVMM, SCVMM creates a PowerShell script on the fly and then executes it via the SCVMM job window. If you need to use the PowerShell script where SCVMM isn’t installed, you can copy the PowerShell script from the SCVMM job window and modify the Hyper-V host-related parameters.

Create Hyper-V VMs using PowerShell

Hyper-V offers the New-VM PowerShell cmdlet that can be used to create a VM on a local or remote Hyper-V host. It’s important to note that, before creating Hyper-V VMs using PowerShell, you’ll need to make some configuration decisions, as explained below:

  • Figure out the Hyper-V virtual switch to which the VM will be connected. You can get Hyper-V virtual switch names by executing the Get-VMSwitch * | Format-Table Name PowerShell command. The command will list all the Hyper-V virtual switches on the local Hyper-V host. Copy the Hyper-V virtual switch name to be used in the VM creation command.
  • Decide the type of memory configuration for the new VM. Are you planning to use static memory or Dynamic Memory? If you plan to use the Dynamic Memory feature, you’ll need to use the Set-VMMemory PowerShell cmdlet after creating the VM.
  • Identify the VM file path where VM files will be stored. It can be a local path, a path to the Cluster Shared Volumes disk in the Hyper-V cluster or a path to the Scale-Out File Server cluster.
  • Decide if you’d like the OS in the VM to be installed via a Preboot Execution Environment server running on the network or if you’d like to set up the OS from a DVD. Depending on the OS deployment type, you’d want to change the boot order of the VM.
  • Are you going to create a new VM on a local or remote Hyper-V host? If you’re going to create a VM on a remote Hyper-V host, get the Hyper-V server’s fully qualified domain name or IP address, and specify that value using the -ComputerName parameter in the New-VM PowerShell cmdlet.
  • Choose the generation of the VM. Generation 2 VMs provide new features, such as guest clustering, Hyper-V virtual hard disk (VHDX) online resizing, secure boot, fast boot and so on. I recommend you choose Generation 2 unless you have a reason to go for Generation 1 VMs.

Once you have gathered the required parameters for the new VM, use the PowerShell command below to create the VM on the Hyper-V host.

New-VM -Name SQLVM -MemoryStartupBytes 8GB -BootDevice VHD -VHDPath C:\ProductionVMs\SQLVM.VHDX -Path C:\ProductionVMs\VMFiles -Generation 2 -Switch ProductionSwitch

This command will create a VM named SQLVM on the local Hyper-V host. The new VM will be configured to use 8 GB of memory and will be stored in the C:\ProductionVMs folder. Note that -Generation 2 specifies that this VM will be created as a Generation 2 VM. If you want to change the new VM’s memory configuration from static to Dynamic Memory, use the PowerShell commands below:

Stop-VM -Name SQLVM

Set-VMMemory SQLVM -DynamicMemoryEnabled $True -MinimumBytes 250MB -StartupBytes 500MB -MaximumBytes 8GB -Priority 70 -Buffer 20

These commands configure the SQLVM with Dynamic Memory and set the required parameters for Dynamic Memory.
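One decision from the checklist above that the example doesn't cover is the boot order. As a sketch (reusing the SQLVM name from the example above; this applies to Generation 2 VMs only), you could point the VM at its network adapter so it deploys the OS from a PXE server:

```powershell
# Make the network adapter the first boot device so the OS is deployed
# from a PXE server on the network (Generation 2 VMs only).
$vm = Get-VM -Name SQLVM
Set-VMFirmware -VM $vm -FirstBootDevice (Get-VMNetworkAdapter -VM $vm)
```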

Question: run powershell commands against a remote vm

Here’s a look at the Invoke-Command cmdlet and how it will be extended in Windows Server 2016.

PowerShell is one of Microsoft’s preferred tools for managing Windows Servers. Although it’s easy to think of PowerShell as a local management tool, PowerShell can just as easily be used to manage other servers in your datacenter. This capability is especially helpful if you have a lot of Hyper-V virtual machines and want to be able to perform bulk management operations.

There are a few different ways of running a PowerShell command against a remote server. For the purposes of this article, however, I want to show you how to use the Invoke-Command cmdlet. The reason I want to talk about this particular method is that the Invoke-Command cmdlet is being extended in Windows Server 2016 to provide better support for Hyper-V virtual machines. I will get to that in a few minutes.

The first thing that you will need to do is to configure the remote system to allow for remote management. Microsoft disables remote PowerShell management by default as a way of enhancing security.

To enable remote PowerShell management, log on to the remote server, open PowerShell (as an Administrator) and run the following command:

Enable-PSRemoting -Force

This command does a few different things. First, it starts the WinRM service, which is used for Windows remote management. It also configures the service to start automatically each time the server boots, and it adds a firewall rule that allows inbound connections. In case you are wondering, the Force parameter is used for the sake of convenience. Without it, PowerShell will nag you for approval as it performs the various steps. You can see what the command looks like in action in Figure 1.

Figure 1. You must use the Enable-PSRemoting cmdlet to prepare the remote server for management.

There are about a zillion different ways that you can use the Invoke-Command cmdlet. Microsoft provides full documentation for this cmdlet on TechNet, covering the full command syntax in ponderous detail. For the purposes of this article, however, I want to keep things simple and show you an easy method of running a command against a remote system.

The first thing that you need to know is that any time you are going to be running a PowerShell command against a remote system, you will have to enter an authentication credential. Although this step is necessary, it is a bit of a pain to enter a set of credentials every time you run a command. Therefore, my advice is to map your credentials to a variable. To do so, enter the following command:

$Cred = Get-Credential

As you can see in Figure 2, entering this command causes PowerShell to prompt you for a username and password. The credentials that you enter are mapped to the variable $Cred.

Figure 2. Your credentials can be mapped to a variable.

Now that your credentials have been captured, the easiest way to run a command against a remote server is by using the following syntax:

Invoke-Command -ComputerName <server name> -Credential $Cred -ScriptBlock {The command that you want to run}

OK, so let’s take a look at how this command works. Right now I am using PowerShell on a system that is running Windows 8.1. I have a Hyper-V server named Hyper-V-4. Let’s suppose that I want to run the Get-VM cmdlet on that server so that I can find out what virtual machines currently exist on it. To do so, I would use this command:

Invoke-Command -ComputerName Hyper-V-4 -Credential $Cred -ScriptBlock {Get-VM}

As you can see, the script block contains the command that needs to be executed on the remote system. It is worth noting that this technique only works if both computers are domain joined and are using Kerberos authentication. Otherwise, you will have to use the HTTPS transport or add the remote computer to a list of trusted hosts. The previously mentioned TechNet article contains instructions for doing so.

At the beginning of this article, I mentioned that Invoke-Command was being extended in Windows Server 2016 to better support Hyper-V virtual machines. Microsoft is adding a parameter named VMName (which is used in place of ComputerName). This extension makes use of a new feature called PowerShell Direct, which allows you to run PowerShell commands on a Hyper-V virtual machine even if the virtual machine is not connected to the network. This is accomplished by communicating with the VM through the VMBus.
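On a Windows Server 2016 (or later) Hyper-V host, a call using the new parameter might look like the following sketch (the VM name 'DemoVM' is my own example; this must be run on the Hyper-V host itself, and $Cred holds a guest credential):

```powershell
# PowerShell Direct: Invoke-Command addresses the VM by name and talks to it
# over the VMBus, so it works even when the guest has no network connectivity.
Invoke-Command -VMName 'DemoVM' -Credential $Cred -ScriptBlock { Get-Service }
```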

So as you can see, the Invoke-Command cmdlet makes it easy to manage remote servers through PowerShell. I would encourage you to check out the previously mentioned TechNet article because there is a lot more that you can do with the Invoke-Command cmdlet than what I have covered here.

Question: managing hyper-v server remotely through powershell

Working with PowerShell can be very common for daily tasks and Hyper-V Server management. However, when there is more than one server to manage, it can be tedious to log on to each one and run the same PowerShell scripts on different computers.

One of the benefits that PowerShell offers is the remote option that allows you to connect to multiple servers, enabling a single PowerShell window to administer as many servers as you need.

The PowerShell remote connection uses WinRM, which listens on TCP port 5985 for HTTP by default (5986 for HTTPS). Although the local firewall exception is created by default when remoting is enabled, make sure that any other firewall between your servers has an exception to allow this communication.

How to do it

These tasks will show you how to enable the PowerShell Remoting feature to manage your Hyper-V Servers remotely using PowerShell.

1. Open a PowerShell window as an administrator on the server on which you want to enable PowerShell Remoting.

2. Type the Enable-PSRemoting cmdlet to enable PowerShell Remoting.

3. The system will prompt you to confirm some settings during the setup. Select A for Yes to All to confirm all of them. Run the Enable-PSRemoting command on all the servers that you want to connect to remotely via PowerShell.

4. In order to connect to another computer on which PowerShell Remoting is already enabled, type Enter-PSSession Hostname, where Hostname is the name of the computer to which you want to connect.

5. To identify all the commands used to manage the PowerShell sessions, you can create a filter with the command Get-Command *PSSession*. A list of all the PSSession commands will appear, showing you all the available remote connection commands.

6. To identify which Hyper-V command lines can be used with the remote ComputerName option, use Get-Command with the following parameter:

Get-Command -Module Hyper-V -ParameterName ComputerName

7. To use the remote PowerShell connection from PowerShell ISE, click on File and select New Remote PowerShell Tab. A window will prompt you for the computer name to which you want to connect and the username, as shown in the following screenshot. Type the computer name and the username to create the connection and click on Connect. Make sure that the destination computer also has the remoting settings enabled.


8. A new tab with the computer name to which you have connected will appear at the top, identifying all the remote connections that you have through PowerShell ISE. The following screenshot shows an example of a PowerShell ISE window with two tabs: the first identifies the local connection, called PowerShell 1, and the second is the remote computer tab, called HVHost.



The process to enable PowerShell Remoting involves creating a firewall exception, configuring the WinRM service, and creating a new listener to accept requests from any IP address. PowerShell configures all of these settings through a single, easy command: Enable-PSRemoting. By running this command, you make sure that your computer has all the components enabled and configured to accept and create new remote connections using PowerShell.

Then, we identified the commands that can be used to manage remote connections: basically, all the commands that contain PSSession. Some examples are as follows:

· Connect-PSSession to reconnect to a disconnected remote session

· Enter-PSSession to start an interactive session with a remote computer

· Exit-PSSession to leave the current connection

· Get-PSSession to show all existing connections

· New-PSSession to create a new session
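A typical lifecycle using these cmdlets might look like the following sketch (the host name HVHost is just an example):

```powershell
# Create a session, work in it interactively, then clean it up.
$session = New-PSSession -ComputerName HVHost    # create the session
Enter-PSSession -Session $session                # drop into the remote prompt
# ...run commands on the remote host...
Exit-PSSession                                   # back to the local prompt
Get-PSSession                                    # list sessions still open
Remove-PSSession -Session $session               # free the remote resources
```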

Another important option is identifying which commands support remote connections; all of them use the ComputerName switch. To show how this switch works, the following example uses the command that creates a new VM to create a VM on a remote computer named HVHost.

New-VM -Name VM01 -ComputerName HVHost

To identify which commands support the ComputerName switch, you saw Get-Command used with a filter to find all the cmdlets. After these steps, your servers will be ready to receive and create remote connections through PowerShell.

Question: 12 steps to remotely manage hyper-v server 2012 core

Here are 12 steps to remotely manage Hyper-V Server 2012 Core. Have you set up a Microsoft Hyper-V Server 2012 Core edition and now you want to remotely manage it in a workgroup (non-domain) environment?

Hopefully I can help ease your frustration with this article by showing you what worked for me.

If Microsoft did one thing that really tested my patience, it’s trying to remotely manage Hyper-V Server Core in a workgroup environment.

Not long ago, I wrote an article titled Remotely Manage Hyper-V Server 2012 Core, but I admit I lost steam after that article/video. I wasn’t very confident in those instructions because every time I tested them there seemed to be different results.

Earlier today I decided to tackle this one again because I have had a lot of questions on this topic. It appears a lot of you out there are having similar issues. I feel very confident this time that I have all the instructions tested and working.

12 Steps to Remotely Manage Hyper-V

Quick run-down

  • Server: Microsoft Hyper-V Server 2012 Core (Free Edition)
  • Client: Windows 8 Pro

This next section is what I’m calling the condensed (advanced) version.

Condensed (advanced) Version

Install Hyper-V Server 2012 Core and log in to the console.

  1. Configure date and time (select #9).
  2. Enable Remote Desktop (select #7). Also select the ‘Less Secure’ option.
  3. Configure Remote Management (select #4 then #1).
  4. Add local administrator account (select #3). Username and password need to be exactly the same as the account you are going to use on the client computer to manage this Hyper-V Server.
  5. Configure network settings (select #8). Configure as a static IP. Same subnet as your home network. Don’t forget to configure the DNS IP.
  6. Set the computer name (select #2). Rename the server and reboot.
  7. Remote Desktop to server. On your client machine, remote to the server via the IP address you assigned it. Use the credentials of the local administrator account you created earlier.
  8. Launch PowerShell. In the black cmd window, run the following command: start powershell
  9. Run the following commands:
    • Enable-NetFirewallRule -DisplayGroup “Windows Remote Management”
    • Enable-NetFirewallRule -DisplayGroup “Remote Event Log Management”
    • Enable-NetFirewallRule -DisplayGroup “Remote Volume Management”
    • Set-Service VDS -StartupType Automatic
  10. Reboot the server (select #12).
  11. Enable Client Firewall Rule. On your client machine, launch an elevated PowerShell prompt and type the following command:
    • Enable-NetFirewallRule -DisplayGroup “Remote Volume Management”
    • ii c:\windows\system32\drivers\etc
  12. Add server hostname and IP to hosts file. Right-click the hosts file and select Properties. On the Security tab, add your username and give your account Modify rights. This is needed because some of the remote management tools we need to use rely on the hosts file to resolve the name. Without doing this you are highly likely to encounter errors while trying to create VHDs and such. One error you might see: There was an unexpected error in configuring the hard disk.
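With the Modify rights in place, the entry itself can be added from the elevated PowerShell prompt you opened in step 11 (the IP address and host name below are examples; use the ones you assigned in steps 5 and 6):

```powershell
# Append a name-resolution entry for the Hyper-V server to the local hosts file.
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "	HVCORE"
```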

There you have it: 12 steps to remotely manage Hyper-V Server 2012 Core.

You should now be able to remotely manage the Hyper-V server from the client machine. This includes managing the Hyper-V server’s disk from within the disk management console on the client. You should be able to create VHDs successfully as well from within Hyper-V Manager on the client (assuming you installed the feature).

This was a quick tutorial on how to setup a working Hyper-V Server 2012 Core edition in a non-domain (workgroup) environment and still be able to remotely manage it.

Three Ways to Make a PowerShell Query

Below are three ways in which you can make a PowerShell query:

  1. Get-QADuser – Uses Quest cmdlet technology
  2. ADSI namespace – Uses System.DirectoryServices
  3. Find-LdapObject – Uses System.DirectoryServices.Protocols

Among the three ways, the third way is the fastest method.

Method of Query:

1. Using the Quest Active Directory CmdLet Get-QADuser:

$MigratedUsers = Get-QADUser -LdapFilter "(attribute=value)"

2. Using the ADSI interface with the “System.DirectoryServices.DirectorySearcher” object:

$root = [ADSI]"LDAP://"
$search = New-Object System.DirectoryServices.DirectorySearcher($root, "(attribute=value)")

$MigratedUsers = $search.FindAll()

3. Find-LdapObject:

Find-LdapObject uses "System.DirectoryServices.Protocols". The S.DS.P module can be downloaded from the Microsoft website, saved locally, and loaded into PowerShell.

To load it into PowerShell:

Add-Type -AssemblyName System.DirectoryServices.Protocols

Import-Module "C:\S.DS.P.psm1"

To use the cmdlet:

$MigratedUsers = Find-LdapObject -SearchFilter:"(attribute=value)" -SearchBase:"DC=Domain,DC=com" -LdapServer:"" -PageSize 500

Powershell – Updating Active Directory Objects Using Data from a CSV

Below are possible ways to update Active Directory objects using data contained in a CSV.

Without a CSV:

Get-ADUser Soma.Bright -Properties SamAccountName | Set-ADUser -Replace @{SamAccountName="Bright.Soma"}

Using a CSV with “identity”:

$file = Import-Csv "AccountInfo.csv"
foreach ($dataRecord in $file) {

Get-ADUser -Identity $dataRecord.SamAccountName -Properties EmployeeType | Set-ADUser -Replace @{EmployeeType = $dataRecord.EmployeeType}

Get-ADUser -Identity $dataRecord.SamAccountName | Set-ADUser -Department $dataRecord.Department
}

Using a CSV with a “filter”:

Import-module ActiveDirectory 
$userList= Import-Csv '.\List of Users.csv'
foreach ($user in $userList) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.sAMAccountName)'" -SearchBase "DC=subdomain,DC=company,DC=com" -Properties Company | ForEach-Object { Set-ADUser $_ -Replace @{Company = 'Deliveron'} }
}

If you then wanted to query AD for those users to make sure they updated correctly, you could use the following query with Get-ADUser:

foreach ($user in $userList) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.sAMAccountName)'" -SearchBase "DC=subdomain,DC=company,DC=com" -Properties Company | Select-Object SamAccountName, Name, Company
}


Updating Multiple values:

Import-Module ActiveDirectory

$users = Import-Csv -Path c:\update.csv

foreach ($user in $users) {
Get-ADUser -Filter "employeeID -eq '$($user.employeeID)'" -Properties * -SearchBase "ou=Test,ou=OurUsers,ou=Logins,dc=domain,dc=com" |
Set-ADUser -employeeNumber $($user.employeeNumber) -department $($user.department) -title $($user.title) -office $($user.office) -streetAddress $($user.streetAddress) -City $($user.City) -state $($user.state) -postalCode $($user.postalCode) -OfficePhone $($user.OfficePhone) -mobile $($user.mobile) -Fax $($user.Fax) -replace @{"extensionAttribute1"=$user.extensionAttribute1; "extensionAttribute2"=$user.extensionAttribute2; "extensionAttribute3"=$user.extensionAttribute3}
}

For more info please see the below links:

Import-csv to update Active directory users

Update AD from CSV

Powershell – CSV – AD Attribute Update

AD / Import CSV Update Attributes Script

Import Users from a CSV File into Active Directory using PowerShell

DNS Problem and Resolution

Question: Solving Other Common DNS Problems

This section lists several common DNS problems and explains how to solve them.

Event ID 7062 appears in the event log.

If you see event ID 7062 in the event log, the DNS server has sent a packet to itself. This is usually caused by a configuration error. Check the following:

  • Make sure that there is no lame delegation for this server. A lame delegation occurs when one server delegates a zone to a server that is not authoritative for the zone.
  • Check the forwarders list to make sure that it does not list itself as a forwarder.
  • If this server includes secondary zones, make sure that it does not list itself as a master server for those zones.
  • If this server includes primary zones, make sure that it does not list itself in the notify list.

Zone transfers to secondary servers that are running BIND are slow.

By default, the Windows 2000 DNS server always uses a fast method of zone transfer. This method uses compression and includes multiple resource records in each message, substantially increasing the speed of zone transfers. Most DNS servers support fast zone transfer. However, BIND 4.9.4 and earlier do not support fast zone transfer. This is unlikely to be a problem, because fast zone transfer is disabled by default when the Windows 2000 DNS Server service is installed. However, if you are using BIND 4.9.4 or earlier and you have enabled fast zone transfer, you need to disable it again.

To disable fast zone transfer

  1. In the DNS console, right-click the DNS server, and then click Properties .
  2. Click the Advanced tab.
  3. In the Server options list, select the Bind secondaries check box, and then click OK .
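If you prefer the command line, the same server option can be toggled with the dnscmd utility from the Windows 2000 Support Tools (a sketch; run it on the DNS server itself):

```
rem Equivalent to checking "Bind secondaries" in the DNS console: send
rem uncompressed, one-record-per-message transfers for old BIND servers.
dnscmd /Config /BindSecondaries 1
```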

You see the error message “Default servers are not available.”

When you start Nslookup, you might see the following error message:

*** Can’t find server name for address <address> : Non-existent domain

*** Default servers are not available

Default Server: Unknown


If you see this message, your DNS server is still able to answer queries and host Active Directory; the resolver simply cannot locate the PTR resource record for the name server that it is configured to use. The properties for your network connection must specify the IP address of at least one name server, and when you start Nslookup, the resolver uses that IP address to look up the name of the server. If the resolver cannot find the name of the server, it displays that error message. However, you can still use Nslookup to query the server.

To solve this problem, check the following:

  • Make sure that a reverse lookup zone that is authoritative for the PTR resource record exists. For more information about adding a reverse lookup zone, see “Adding a Reverse Lookup Zone” earlier in this chapter.
  • Make sure that the reverse lookup zone includes a PTR resource record for the name server.
  • Make sure that the name server you are using for your lookup can query the server that contains the PTR resource record and the reverse lookup zone either iteratively or recursively.

User entered incorrect data in zone.

For information about how to add or update records by using the DNS console, see Windows 2000 Server Help. For more information about using resource records in zones, search for the keywords “managing” and “resource records” in Windows 2000 Server Help.

Active Directory-integrated zones contain inconsistent data.

For Active Directory–integrated zones, it is also possible that the affected records for the query have been updated in Active Directory but not replicated to all DNS servers that are loading the zone. By default, all DNS servers that load zones from Active Directory poll Active Directory at a set interval — typically, every 15 minutes — and update the zone for any incremental changes to the zone. In most cases, a DNS update takes no more than 20 minutes to replicate to all DNS servers that are used in an Active Directory domain environment that uses default replication settings and reliable high-speed links.

User cannot resolve name that exists on a correctly configured DNS server.

First, confirm that the name was not entered in error by the user. Confirm the exact set of characters entered by the user when the original DNS query was made. Also, if the name used in the initial query was unqualified and was not the FQDN, try the FQDN instead in the client application and repeat the query. Be sure to include the period at the end of the name to indicate the name entered is an exact FQDN.

If the FQDN query succeeds and returns correct data in the response, the most likely cause of the problem is a misconfigured domain suffix search list that is used in the client resolver settings.

Name resolution to Internet is slow, intermittent, or fails.

If queries destined for the Internet are slow or intermittent, or you cannot resolve names on the Internet, but local Intranet name resolution operates successfully, the cache file on your Windows 2000–based server might be corrupt, missing, or out of date. You can either replace the cache file with an original version of the cache file or manually enter the correct root hints into the cache file from the DNS console. If the DNS server is configured to load data on startup from Active Directory and the registry, you must use the DNS console to enter the root hints.

To enter root hints in the DNS console

  1. In the DNS console, double-click the server to expand it.
  2. Right-click the server, and then click Properties .
  3. Click the Root Hints tab.
  4. Enter your root hints, and then click OK .

To replace your cache file

  1. Stop the DNS service by typing the following at the command prompt: net stop dns
  2. Type the following: cd %Systemroot%\System32\DNS
  3. Rename your cache file by typing the following: ren cache.dns cache.old
  4. Copy the original version of the cache file, which might be found in one of two places, by typing either of the following: copy backup\cache.dns or copy samples\cache.dns
  5. Start the DNS service by typing the following: net start dns

If name resolution to the Internet still fails, repeat the procedure, copying the cache file from your Windows 2000 source media.

To copy the cache file from your Windows 2000 source media

  • At the command prompt, type the following:
    expand <drive>:\i386\cache.dn_ %Systemroot%\system32\dns\cache.dns
    where <drive> is the drive that contains your Windows 2000 source media.

Resolver does not take advantage of round robin feature.

Windows 2000 includes subnet prioritization, a new feature, which reduces network traffic across subnets. However, it prevents the resolver from using the round robin feature as defined in RFC 1794. By using the round robin feature, the server rotates the order of A resource record data returned in a query answer in which multiple resource records of the same type exist for a queried DNS domain name. However, if the resolver is configured for subnet prioritization, the resolver reorders the list to favor IP addresses from networks to which it is directly connected.

If you would prefer to use the round robin feature rather than the subnet prioritization feature, you can do so by changing the value of a registry entry. For more information about configuring the subnet prioritization feature, see “Configuring Subnet Prioritization” earlier in this chapter.

WINS Lookup record causes zone transfer to a third-party DNS server to fail.

If a zone transfer from a Windows 2000 server to a third-party DNS server fails, check whether the zone includes any WINS or WINS-R records. If it does, you can prevent these records from being propagated to a secondary DNS server.

To prevent propagation of WINS lookup records to a secondary DNS server

  1. In the DNS console, double-click your DNS server, right-click the zone name that contains the WINS record, and then click Properties .
  2. In the Properties dialog box for the zone, click the WINS tab and select the check box Do not replicate this record.

To prevent propagation of WINS-R records to a secondary DNS server

  1. In the DNS console, double-click your DNS server, right-click the reverse lookup zone that contains the WINS-R record, and then click Properties .
  2. In the properties page for the zone, click the WINS-R tab and select the check box Do not replicate this record .

WINS lookup record causes a problem with authoritative data.

If you have a problem with incorrect authoritative data in a zone for which WINS lookup integration is enabled, the erroneous data might be caused by WINS returning incorrect data. You can tell whether WINS is the source of the incorrect data by checking the TTL of the data in an Nslookup query. Normally, the DNS service answers with names stored in authoritative zone data by using the set zone or resource record TTL value. It generally answers only with decreased TTLs when providing answers based on non-authoritative, cached data obtained from other DNS servers during recursive lookups.

However, WINS lookups are an exception. The DNS server represents data from a WINS server as authoritative but stores the data in the server cache only, rather than in zones, and decreases the TTL of the data.

To determine whether data comes from a WINS server

  1. At the command prompt, type nslookup -d2, and then, at the Nslookup prompt, type server <server>, where <server> is a server that is authoritative for the name that you want to test. This starts Nslookup in user-interactive, debug mode and makes sure that you are querying the correct server. If you query a server that is not authoritative for the name that you test, you are not able to tell whether the data comes from a WINS server.
  2. To test for a WINS forward lookup, type the following: set querytype=a. Or, to test for a WINS reverse lookup, type the following: set querytype=ptr
  3. Enter the forward or reverse DNS domain name that you want to test.
  4. In the response, note whether the server answered authoritatively or non-authoritatively, and note the TTL value.
  5. If the server does not answer authoritatively, the source of the data is not a WINS server. However, if the server answered authoritatively, repeat a second query for the name.
  6. In the response, note whether the TTL value decreased. If it did, the source of the data is a WINS server.
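The decision rule in the steps above can be sketched in a few lines of Python. This is an illustration only; the function name and result labels are ours, not part of any Windows tool:

```python
# Sketch of the WINS-detection rule from the steps above.
# "authoritative" is whether the server answered authoritatively;
# the two TTLs come from asking the same question twice in a row.
def classify_answer(authoritative, first_ttl, second_ttl):
    if not authoritative:
        # Non-authoritative answers are ordinary cached data, not WINS.
        return "not WINS (non-authoritative data)"
    if second_ttl < first_ttl:
        # An authoritative answer whose TTL counts down between queries:
        # the DNS server is holding WINS-derived data in its cache.
        return "WINS lookup data"
    # An authoritative answer with a stable TTL comes from zone data.
    return "zone data"

print(classify_answer(True, 3600, 3600))  # -> zone data
print(classify_answer(True, 3600, 3487))  # -> WINS lookup data
```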

If you have determined that the data comes from a WINS server, check the WINS server for problems. For more information about checking the WINS server for problems, see “Windows Internet Name Service” in this book.

A zone reappears after you delete it.

In some cases, when you delete a secondary copy of the zone, it might reappear. If you delete a secondary copy of the zone when an Active Directory-integrated copy of the zone exists in Active Directory, and the DNS server from which you delete the secondary copy is configured to load data on startup from Active Directory and the registry, the zone reappears.

If you want to delete a secondary copy of a zone that exists in Active Directory, configure the DNS server to load data on startup from the registry, and then delete the zone from the DNS server that is hosting the secondary copy of the zone. Alternatively, you can completely delete the zone from Active Directory when you are logged into a domain controller that has a copy of the zone.

You see error messages stating that PTR records could not be registered

When the DNS server that is authoritative for the reverse lookup zone cannot or is configured not to perform dynamic updates, the system records errors in the event log stating that PTR records could not be registered. You can eliminate the event log errors by disabling dynamic update registration of PTR records on the DNS client. To disable dynamic update registration, add the DisableReverseAddressRegistrations entry, with a value of 1 and a data type of REG_DWORD, to the following registry subkey:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<name of the interface>

where name of the interface is the GUID of a network adapter.
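For convenience, the registry change described above can also be captured in a .reg file. This is a sketch: the interface GUID below is a placeholder and must be replaced with the GUID of your own network adapter.

```
Windows Registry Editor Version 5.00

; Placeholder GUID - replace with the GUID of the network adapter,
; as listed under ...\Tcpip\Parameters\Interfaces.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{00000000-0000-0000-0000-000000000000}]
"DisableReverseAddressRegistrations"=dword:00000001
```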

Question: What does “Default Server: Unknown” mean for Windows nslookup?



I’d like to solve a question with your kind help, about nslookup on Windows. Please see my CMD commands below (run on WinXP SP2).

C:\>ipconfig /all

Windows IP Configuration

        Host Name . . . . . . . . . . . . : vchjXPsp3MUI
        Primary Dns Suffix  . . . . . . . :
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No

Ethernet adapter LAN1-hostvn1:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : VMware Accelerated AMD PCNet Adapter

        Physical Address. . . . . . . . . : 00-0C-29-E0-68-00
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . :
        Subnet Mask . . . . . . . . . . . :
        Default Gateway . . . . . . . . . :
        DHCP Server . . . . . . . . . . . :
        DNS Servers . . . . . . . . . . . :
        Primary WINS Server . . . . . . . :
        Lease Obtained. . . . . . . . . . : Wednesday, August 03, 2011 8:58:19 AM
        Lease Expires . . . . . . . . . . : Thursday, August 02, 2012 8:58:19 AM

Ethernet adapter LAN2-bridged:

        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : VMware Accelerated AMD PCNet Adapter

        Physical Address. . . . . . . . . : 00-0C-29-E0-68-0A

C:\>ipconfig /flushdns

Windows IP Configuration

Successfully flushed the DNS Resolver Cache.

*** Can't find server name for address Non-existent domain
*** Default servers are not available
Default Server:  UnKnown

Server:  UnKnown



You can see that I have assigned a DNS server in my IP configuration, but why does nslookup spout

*** Can't find server name for address Non-existent domain
*** Default servers are not available
Default Server: Unknown

What does it mean by saying “not available” and “Unknown”?

The DNS server is working correctly, because it answers queries as expected. The DNS server runs Windows Server 2003 SP2.

Some detail info:

> set debug
Server:  UnKnown

Got answer:
        opcode = QUERY, id = 4, rcode = NOERROR
        header flags:  response, auth. answer, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:, type = A, class = IN
    ->  dev.nls
        ttl = 3600 (1 hour)
        primary name server =
        responsible mail addr =
        serial  = 14716
        refresh = 900 (15 mins)
        retry   = 600 (10 mins)
        expire  = 86400 (1 day)
        default TTL = 3600 (1 hour)

Got answer:
        opcode = QUERY, id = 5, rcode = NOERROR
        header flags:  response, auth. answer, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:, type = A, class = IN
    ->  dev.nls
        ttl = 3600 (1 hour)
        primary name server =
        responsible mail addr =
        serial  = 14716
        refresh = 900 (15 mins)
        retry   = 600 (10 mins)
        expire  = 86400 (1 day)
        default TTL = 3600 (1 hour)



Any idea? Thank you.


Nslookup will try to resolve the name for the IP address of the DNS server configured as the primary DNS server on the client by performing a reverse lookup of that IP address. If you don’t have an rDNS zone set up for your network/subnet, you’ll get the “server unknown” message, because nslookup will be unable to resolve the name for the IP address.

It’s not an error condition and won’t cause any problems for normal AD and DNS operations.


Your server isn’t returning a reverse lookup for its name. That’s why you’re seeing “Unknown” there. You’ll need to create the appropriate reverse lookup zone to allow your server to reverse-resolve its own IP address back to its name.


Well, after adding reverse lookup to my internal DNS server, Default Server now show the domain name of my DNS server.

Sample output:

Default Server:

NOTE: If there are multiple names mapping to , Default Server will randomly display one of the names.


Question: NSLOOKUP: *** Can’t find server name… / Default Server: UnKnown

NSLOOKUP is a command line tool which comes with most operating systems and is used for querying DNS servers.

When NSLOOKUP starts, before anything else, it checks the computer’s network configuration to determine the IP address of the DNS server that the computer uses.

Then it does a reverse DNS lookup on that IP address to determine the name of the DNS server.

If reverse DNS for that IP address is not setup correctly, then NSLOOKUP cannot determine the name associated with the IP address.

On Windows Vista/2008, it then says “Default Server: UnKnown”.

On earlier Windows versions, it displays the error message “*** Can’t find server name for address …”.

This does NOT indicate a problem with the actual domain name that you are trying to look up.

It only means that there is no reverse DNS name for the DNS server IP address, which in most cases may not be a problem at all.

To fix this you need to properly configure the reverse zone for the IP address of the DNS server, and make sure that the reverse zone is properly delegated to the server by your IP provider. See the reference article below for more details.

To create a reverse zone in Simple DNS Plus, click the “Records” button, select “New” -> “Zone”, select “Reverse Zone…”, and follow the prompts.

Question: How to fix NSLOOKUP Default Server: UnKnown?

Issue: “Default Server: UnKnown” error on NSLOOKUP from a Windows Server 2008 DNS server.

Note: To show the server name, a reverse DNS zone should be configured. If you do not have reverse DNS configured, please look into my post below, which is related to reverse DNS configuration.

This issue is not a critical one. Even with this error, your DNS resolution can work smoothly. But it’s embarrassing when there are issues like this, right? Yes, I know! Me too. 😀

The reason for this is that your DNS server does not possess a PTR record for itself; simply put, it does not know its own name. By creating a static PTR entry we can fix this and let the DNS server know its own name.

1. Open the DNS management console in the Server 2008
        Start > Administrative Tools > DNS

2. Go to your Reverse Lookup Zone icon, right-click it, and select “New Pointer (PTR)”.

3. In the New PTR window, enter the IP address of the DNS server and enter (or select) the host name of the server.

4. Now click OK and restart the DNS server service.

Now check to see if it is working.

Question: How to solve nslookup shows unknown for the default DNS server

When I built the domain controller with the DNS role, I got the unknown default server result when using nslookup.

Even though the other machines in the organization's domain use this server as their DNS server, nslookup still showed the same issue.

After a little investigation I found that it's not a critical issue. Even with this error your DNS resolution can work smoothly; it just means that there is no reverse DNS name for the DNS server's IP address, which in most cases may not be a problem at all.

I found a solution that says turning on IPv6 on the NIC will solve this issue, but I don't want to turn on IPv6; I just want to fix it in the IPv4 protocol.

To fix this you need to properly configure the reverse zone for the IP address of the DNS server, and make sure that the reverse zone is properly delegated to the server by your IP provider.

So the problem is the Reverse Lookup Zone: the DNS server did not create a related Reverse Lookup Zone automatically, so you should create it manually yourself.

OK, we found the root cause, so let's fix it:

Right-click on Reverse Lookup Zones and click New Zone.

Create a primary zone.

Type in your Network ID, which is your network subnet.

Select the reverse lookup zone; now you have the right name. Next.

Run ipconfig /flushdns and check.



When I do an nslookup, I get the response listed below:


Default Server:  UnKnown

Address:  ::1

As far as I can verify, EDNS0 is disabled, and PTR records exist for the server in the zone. Also, on the server, if I uncheck the IPv6 protocol in the TCP/IP properties of the NIC, this issue goes away.


Check the IPv6 settings: they should obtain the DNS server address automatically.

Change the preferred DNS server from ::1 to “Obtain DNS server address automatically”.

Question: DNS Server: nslookup response “Default Server Unknown”

Recently, I had a power failure in my data center and faced an Active Directory/DNS crash as well. I had configured Active Directory and DNS to support my users and organization, and AD started replication, but after a day I noticed that nslookup showed the message Default Server: Unknown. Oops! I was really, really worried about what had happened to it. Obviously, I started troubleshooting, and the solution was quite unexpected to me.

Solution 1:

You need to log in to your DNS server and, if you haven't set up your reverse lookup zone, create it. If that's already done, then you need to create a PTR record pointing to the server (in my example). After creating the PTR record or configuring the reverse lookup zone, you will be able to see the server name, as in the image below.

Solution 2:

In some cases, your DNS may behave differently: it shows the exact default server name, but when you give it any website name, it shows an error message like “request timed out. timeout was 2 seconds”, etc. In this case you have to check the firewall on the DNS server, or any firewall between your computer and the DNS server, and allow your DNS server through it.

Solution 3:

You may notice that even disabling the firewall on the DNS server or the local computer, or allowing the DNS server through an intermediate firewall, doesn't help. Then you must check that your DNS server is properly configured with live DNS servers. Please double-check the live DNS IP addresses in the DNS forwarders. I hope these tips will help you; please show yourself in the comments to improve the post.

Question: Windows 2008/2012 DNS Nslookup request timed out // Default server: Unknown Address: ::1


If, on a domain controller that is also a DNS server, you have the following error after running the nslookup command:

DNS Nslookup request timed out 

Timeout was 2 seconds. 

Default server: Unknown 

Address: ::1

it's just that your server is trying to query DNS over IPv6.

The bad solution is to disable IPv6 in the settings of the network card:

If you disable IPv6 on your 2008/2012 server, you lose the following features:

  • Remote Assistance
  • Windows Meeting Space (P2P)
  • Homegroup
  • DirectAccess
  • Client Side Caching (offline files) and BranchCache (Windows Server 2008 R2 and Windows 7)


The good practice, if you do not want to manage IPv6 in DNS, is to disable IPv6 only on the DNS interface:

In the DNS console, right-click your server, then choose Properties.

Here we see that IPv4 and IPv6 are enabled.

Select Only the following IP addresses, leave the IPv4 address, and click OK.

I used an MS fix on my domain controller to prefer IPv4 over IPv6.

It should work afterwards.

Question: nslookup returns server unknown

I am not sure if I have a reason to be uncomfortable but the results below do make me uncomfortable. Note that I do not have any problems accessing my network resources and the internet from any program. However…

Here is the ipconfig on my pc:

   Connection-specific DNS Suffix  . :

   Description . . . . . . . . . . . : Intel(R) Centrino(R) Advanced-N 6230

   Physical Address. . . . . . . . . : 88-53-2E-30-87-75

   DHCP Enabled. . . . . . . . . . . : Yes

   Autoconfiguration Enabled . . . . : Yes

   IPv6 Address. . . . . . . . . . . : fcee:4:7:9:0:6:3ede:f16d(Preferred)

   Lease Obtained. . . . . . . . . . : Saturday, February 02, 2013 1:32:46 PM

   Lease Expires . . . . . . . . . . : Thursday, February 14, 2013 1:32:45 PM

   Link-local IPv6 Address . . . . . : fe80::3039:a6e1:8fc:9bec%13(Preferred)

   IPv4 Address. . . . . . . . . . . :

   Subnet Mask . . . . . . . . . . . :

   Lease Obtained. . . . . . . . . . : Sunday, February 03, 2013 12:18:48 PM

   Lease Expires . . . . . . . . . . : Sunday, February 03, 2013 1:18:47 PM

   Default Gateway . . . . . . . . . :

   DHCP Server . . . . . . . . . . . :

   DHCPv6 IAID . . . . . . . . . . . : 327701294

   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-18-4C-D3-C3-14-FE-B5-C1-FD-04

   DNS Servers . . . . . . . . . . . : fcee:4:7:9:10::




   NetBIOS over Tcpip. . . . . . . . : Enabled

   Connection-specific DNS Suffix Search List :


And here is the reason for concern (ran on my pc):


        Server:  UnKnown

        Address:  fcee:4:7:9:10::

        *** UnKnown can’t find No response from server

But if I specify explicitly any of the DNS servers:

        C:\Windows\system32>nslookup white



        Non-authoritative answer:


        Addresses:  2001:4998:f00b:1fe::3001









        C:\Windows\system32>nslookup skylark



        Non-authoritative answer:


        Addresses:  2001:4998:f00b:1fe::3001







Ping works fine (as does any other app I tried):


        Pinging [] with 32 bytes of data:

        Reply from bytes=32 time=59ms TTL=47

        Reply from bytes=32 time=72ms TTL=49

        Reply from bytes=32 time=121ms TTL=47

        Reply from bytes=32 time=105ms TTL=49

        Ping statistics for

            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

        Approximate round trip times in milli-seconds:

            Minimum = 59ms, Maximum = 121ms, Average = 89ms

More about the environment:

        – Very small domain – two AD servers (both Windows Server 2012), two computers (running Windows 8) and three or four devices (printer, phone, WiFi access point, etc.).

        – The network (DHCP, DNS, servers and gateway static addresses etc.) is both IPv4 and IPv6.

        – There are DNS running on both DC servers.

        – The servers have static IPv4 and 6 addresses.

        – The servers have both DNS addresses (both IPv4 and 6) in their IP configuration.

        – Single forward lookup zone – (and of course _msdcs…).

        – Two reverse lookup zones one for IPv4 and one for IPv6.

        – DHCP has the two DNS servers in the options.

Call me old-fashioned, but I've been using nslookup to diagnose my network problems for years, and now that it doesn't answer unless I specify the DNS server, it makes me nervous. Am I right, and if I am, can you suggest possible problems in my configuration?


It’s only showing “unknown” for the IPv6 address.

Go into your IPv6 properties, and set the IP and DNS address settings to be obtained automatically.

Then in Manage network adapters windows, change the view options to show Menu, then click on Advanced, Advanced, and make sure IPv4 is on top instead of IPv6.



Everything was exactly the way you suggested… Then I played with the order: v6 before v4, just to try and see, and it got worse. I reversed back to v4 before v6, and almost all is looking good, with the exception of Server: UnKnown (as I said in the original post, I do have a reverse lookup zone). But this is something I can live with (unless you or somebody else has another suggestion). I am marking your reply as the solution.

Thank you very much!


I'm sure you've figured this out by now. Although the recommendation from ACE is correct, there is possibly another issue.

1. Open Network and Sharing Center

2. Change adapter settings.

3. Select the connection, right-click it, and choose Properties of the network connection.

4. Double-click the IPv6 TCP/IP settings.

Determine if your IPv6 setting is as shown. If so, change it to “Obtain DNS server …”

To Get a List of AD Users With or Without “Null”, “ALL”, and “Empty” Values or Attributes Using PowerShell

To get a list of AD users with no mail attribute:

Get-ADUser -Filter * | Where-Object {$_.mail -eq $null}

get-aduser -properties Company -filter {mail -notlike "*"}

Get-ADUser -Filter * -Properties Company | Where-Object {$_.Company -eq $null}

get-aduser -filter * -properties * | where {($_.mail -eq $null) -and ($_.manager -ne $null)}

get-aduser -filter * -properties * | where {($_.mail -eq $null) -and ($_.manager -ne $null)} | select name

NOTE: Running the opposite commands above will give you a list of the AD users with mail attribute e.g

Get-ADUser -Properties mail -Filter {mail -like '*'}

NOTE: The command below will give you an error, because you cannot use $null with -eq in the -Filter parameter in PowerShell.

get-aduser -properties Company -filter {Company -eq $null}

Also, the below is known to fail:

get-aduser -properties Company -filter {Company -eq ""}

The list of active accounts with e-mail addresses:

Get-ADUser -Filter {(mail -ne "null") -and (Enabled -eq "true")} -Properties Surname,GivenName,mail | Select-Object Name,Surname,GivenName,mail | Format-Table


Wondering what a VLAN is? Read up on the information below about VLANs.

Virtual Local Area Network (VLAN):

Definition – What does Virtual Local Area Network (VLAN) mean?

A virtual local area network (VLAN) is a logical group of workstations, servers and network devices that appear to be on the same LAN despite their geographical distribution. A VLAN allows a network of computers and users to communicate in a simulated environment as if they exist in a single LAN and are sharing a single broadcast and multicast domain. VLANs are implemented to achieve scalability, security and ease of network management and can quickly adapt to changes in network requirements and relocation of workstations and server nodes.

Higher-end switches allow the functionality and implementation of VLANs. The purpose of implementing a VLAN is to improve the performance of a network or apply appropriate security features.

Techopedia explains Virtual Local Area Network (VLAN)

Computer networks can be segmented into local area networks (LANs) and wide area networks (WANs). Network devices such as switches, hubs, bridges, workstations and servers connected to each other in the same network at a specific location are generally known as LANs. A LAN is also considered a broadcast domain.

A VLAN allows several networks to work virtually as one LAN. One of the most beneficial elements of a VLAN is that it reduces latency in the network, which saves network resources and increases network efficiency. In addition, VLANs are created to provide segmentation and to assist with issues like security, network management and scalability. Traffic patterns can also easily be controlled by using VLANs.

The key benefits of implementing VLANs include:

  • Allowing network administrators to apply additional security to network communication
  • Making expansion and relocation of a network or a network device easier
  • Providing flexibility because administrators are able to configure in a centralized environment while the devices might be located in different geographical locations
  • Decreasing the latency and traffic load on the network and the network devices, offering increased performance

VLANs also have some disadvantages and limitations as listed below:

  • High risk of virus issues because one infected system may spread a virus through the whole logical network
  • Equipment limitations in very large networks because additional routers might be needed to control the workload
  • More effective at controlling latency than a WAN, but less efficient than a LAN

REVISE: VLAN, Subnet, Subnetmask, Switch, Router and Gateway

The links below answer questions related to VLANs, subnets, subnet masks, switches, routers, gateways and CIDR:

Question: Does a VLAN require a different network?

I often see examples of VLANs being like so:

VLAN 10 – 192.168.10.x

VLAN 20 – 192.168.20.x

Isn’t this redundant? Why would I utilize both VLAN tags and different networks? I would expect to see the following kind of example when discussing VLANs:

VLAN 10 – 192.168.40.x

VLAN 20 – 192.168.40.x

Point being, VLAN tagging is independent of subnetting.

If using VLANs don’t require different subnets, what is the purpose of assigning different VLAN IDs to different subnets?

Why do I see examples like the first one above? What is the point? Seems like it’s just complicating the topic.


Here’s my mental block coming into play… are you saying my second example is technically invalid and therefore nonsensical?

No, your second example is fine. Just don't ever expect on VLAN 10 to talk to a different on VLAN 20, as a router won't know which one you are talking about; to a layer 3 router they are the same network. But that's exactly what I did to separate voice and data in one instance, where they needed to use the same addresses and never needed to talk to each other.


From a switch's point of view, if you don't have any VLANs, different subnets on the same switch work perfectly, but traffic goes only between devices within the same subnet.

If you have one VLAN with different subnets, it again works perfectly fine within the switch.

In the above scenario, if traffic goes to a router, it will be blocked, ignored, or forwarded depending on the configuration of the router.

There are many reasons why you wouldn't put many different subnets on one VLAN. One of them is that the whole point of using VLANs is to separate your network.


Each VLAN is an IP subnet, so each VLAN shall have its own IP address range, which does not interfere or overlap with other VLANs' subnets.

Question: VLAN VS Subnetting

Is separating a network by VLAN the same thing as separating a network by subnetting? I understand that by subnetting, I'm creating different networks that will have different network addresses.

If I separate a network with different VLANs, am I creating separate networks as in subnetting? If I use VLANs to create different broadcast domains, is subnetting necessary?


Normally, 1 IP subnet is associated with 1 layer 2 broadcast domain (VLAN).   Every useful VLAN (from an IP perspective) will have an IP network associated with it.


VLANs are for creating broadcast domains (different networks) at the L2 level. But only PCs on the same VLAN can communicate, unless you have a L3 switch or router, in which case, you will still have to subnet (give the VLANs IP addresses).


A switch will not allow you to place 2 VLAN interfaces in the same subnet. Remember that VLANs and subnets are one and the same.

If you have a different subnet, then you need a different VLAN, as long as you are not crossing an L3 boundary.


I believe I may have confused you… I am sorry, I get crazy sometimes. Let me rephrase that. You can ONLY use the same VLAN ID if you are separated by an L3 boundary (i.e., a router). Each subnet has its own broadcast address; for the subnet, the broadcast is, and broadcast messages will not travel outside this network.

Then, on the other side of the router, the other subnet will have its own broadcast address. Broadcast messages will not travel outside that subnet.

For both of these subnets, because they are separated by an L3 boundary, both subnets can use the same VLAN IDs.


I would like to answer your question with the simplest way possible what you have asked is….

Is separating a network by VLAN same thing as separating network by subnetting? I understand that by subnetting, I’m creating different networks that will have different network address.

Yes separating a network by VLAN is same sort of concept what u achieve by subnetting the network . Yes you understanding about subnetting is correct. The only differece here is VLAN is about separating the network at Layer2 where as when u talk about subnetting you are talking about Layer3.

If I separate a network with different VLANs, am I creating separate networks like in subnetting? If I use VLAN to create different broadcast domain, is subnetting necessary?

Yes if you separate a network with different VLANs you are creating separate networks like in subnetting. If you use VLAN to create different broadcast domain , subnetting becomes necessary as a part of it as you can not configure the two VLAN’s with the same IP range.

Understand this concept like this … I believe you already know that LAYER 2 is a single broadcast domain, in order to limit the broadcast and let it be handled on a upper layer-3 the concept of VLAN’s came in. so now when you have VLAN broadcast it remains in the same VLAN dosent go out to the other VLAN. unless there is a layer 3 device avaliable to make it happen. So eventually your broadcast is limited to single VLAN. now if you have Layer3 device avaliable and it is configured to allow the communication between the different VLANs or the subnets only then the traffic propagates.

So the main concept behind all this is to limit the broadcast on layer three and make it appear as if working like layer3 to not allow all the broadcast everywhere.

Question: Ping different subnet in the same VLAN

I’m wondering if I have the following situation:

PC1:  =VLAN 100=== SWITCH=== VLAN100=   PC2:

Are the PCs able to ping each other? I checked this in Packet Tracer, but it isn't working. If both PCs are in different subnets but in the same VLAN, wouldn't the switch broadcast the ping to the other PC? Shouldn't it work, and why does it not work?


1. This seems to be your present situation:

VLAN - Mismatched IP - 1.png

2. When you try to ping from here is what happens:

On PC1:

Command – Ping

Logic working inside PC1

  • My network is
  • I have to ping
  • Do the first three digits match my network?
  • Is 10.10.10.x equal to 192.168.1.x?
  • No
  • Because it is a foreign network, I need to send this frame to my gateway
  • Do I have a gateway configured?
  • No
  • Drop this packet, as I cannot do anything about it

3. So the ping fails.

4. Let’s experiment.

VLAN - Mismatched IP - 2.png

5. Now the same objective, but with computers own IP configured as a gateway.

On PC1:

Command – Ping

Logic working inside PC1

Stage 1

This stage won’t be visible

  • My network is
  • I have to ping
  • Do the first three digits match my network?
  • Is 10.10.10.x equal to 192.168.1.x?
  • No
  • I need to send this frame to my gateway
  • Do I have a gateway configured?
  • Yes
  • Can I get the MAC of the gateway?
  • Yes. Got it.
  • Prepare a normal frame {My MAC – Gateway MAC | My IP – Dst. IP}
  • Send to gateway MAC – (own interface)

Stage 2

  • Now, the information required at the gateway is the destination MAC of
  • So, prepare an ARP request frame { My MAC – (Gateway) | Dst. ff:ff:ff:ff:ff:ff | Src. IP – Dst. IP}
  • This is sent out of the interface and replied to by, as it is

1. able to receive this arp request – on the same “switch / vlan” &

2. it has its own IP configured as gateway

Stage 3

  • Now all the information required for a valid frame is available, so sends out a valid frame to its gateway – i.e. its own network interface
  • This is sent on the wire, received by
  • Here it is analyzed as all the information matches – L2 and L3
  • A reply is prepared in the similar way and sent back to

Stage 4

Ping is successful now.

6. Apply the same logic for gateway address configured as the opposite PC – it will work too,

  • though the pings will now be successful among the immediate two PCs only
  • compared to all the PCs configured in a similar fashion in the previous case.

7. Framing – each stage – that is what you would need to work on.


 It is possible to aggregate different subnets in the same VLAN. From the point of view of the VLAN, they are in the same broadcast domain, but when a host needs to send a packet to another host, it does not care about VLANs. What a host cares about is whether the destination IP address is in the same subnet or not. If it is, it will send the packet right away to the destination IP address (after knowing the destination MAC address to put in the Ethernet frame); if it isn’t, then the host will try to send the packet to its default gateway. When you have hosts in different subnets, you need a routing capable device (router, L3 switch) to route the packet between the subnets.

     In your case, the two hosts are in different subnets, that’s why the ping doesn’t work. Try adding default gateway and a router into your topology and the ping might work.

     The bottom line is there are two different process you should separate here:

     1. VLAN segmentation;

     2. Packet forwarding decision (“do I send it to the destination directly, or do I send it to my default gateway?”).


1. I am afraid that will not be possible on a layer 2 switch without routing capabilities.

2. When the frame reaches the switch svi, it will need to be routed to a different network.

3. I understand that 2960s have a limited L3 routing capabilities. However, L2 switches like 2950s do not.

4. Even on a 2960, the following needs to be configured, in addition to multiple IP addresses for the SVIs.

ip routing

ip route

ip route

5. Therefore from a L2 point of view, multiple IP addresses on a SVI will still not allow mismatched network reachability, unless assisted by routing.


When two hosts are on the same L3 broadcast domain they can expect to freely pass frames to each other at L2 (no need for a gateway).  When two hosts on different L3 domains need to talk to each other they typically need to go through a router.  That is why they need ARP.  ARP helps a host map L3 addressing to L2 addressing. When they are on different L3 broadcast domains, most hosts will not ARP unless they have a gateway configured because the only MAC address a host will be interested in when trying to communicate with a host on another L3 domain is the MAC address of the gateway on its own L3 domain.

Take a Windows PC as an example.  If you configured a NIC with an IP address and subnet mask but no gateway, and then tried to ping a device on a different L3 network, you would never see an ARP from the PC.  However, if you added a default gateway to the configuration and tried to ping again, you would see an ARP.  Take a look at this screenshot:


The first ping attempt results in failures and no ARP occurs, because the host has nothing to ARP (no gateway).  Then I added a static route (giving the host a gateway to other networks), and the second ping results in ARP requests from the PC to the gateway address.  In this case, I have no gateway active on, but the PC doesn’t know this.  It just knows that it needs to send an ARP for so that it can send L2 frames to it.


Ok, so we can say that Packet Tracer is not showing me the default/correct behaviour.

So even if both PCs are in different IP subnets but in the same VLAN, and given they use their own IP or the IP of the other PC as a gateway, the ARP process can build the frame and the ping will be successful, correct?


1. I can assure you, that is correct.

Question: Can Two PC in Different Subnet connected to each other communicate


2 different computers on 2 different subnets connected to the same layer 2 switch can ping each other… *IF* they are on the same VLAN.

You don’t need a gateway. This is simply dependent on the network topology.


Two PCs on different subnets (VLANs) would NOT be able to ping each other unless there is a layer 3 device (i.e. a router). Recall from the ISO OSI reference model that layer 3 devices allow for interconnectivity between networks.


2 PC’s in the same VLAN on different Subnets CAN IN FACT ping each other. They are in the same Layer 2 broadcast domain.

Don’t assume that just because you have 2 different subnets that you also have 2 separate VLANs.


Can you go into detail as to why this works?

I also tested this in PT and I am unable to get it to work…

I’ll agree with simplyccna. They might be in the same VLAN, but that does not always mean they are in the same network. I would think both PCs would have to be in the same subnet and network in order for them to ping.

If you have a PC that has a mask of with a network IP address and a PC that has a mask of and a network IP address of, then maybe the first PC would be able to ping the second PC, but I don’t think the second PC would be able to ping the first PC even if they are in the same VLAN. (I have not tested this, so I don’t know; just a guess.)

Question: 2 PCs connected to same switch but in diff network, why can’t communicate?

   I have a small question; though these types of questions are asked often, I am not getting the exact answer.


Two machines are connected to a single switch, Switch-A

IP of machine-1 is

& that of machine-2 is (another network)

Now My Question is that…

When I ping from machine 1 to machine 2…

This should be the process that I think…happens….

at machine 1: The ping command creates Packet -> Frames -> bits

at Switch -> from bits -> frames conversion is done; it checks for the destination MAC address, and if it is available it sends the data/frame to machine 2, i.e.

               Bits- >Frames -> send to another machine ->frame -> bits

at machine 2: bits->frames->packet and vice versa it should send the reply…

But in real scenario… this doesn’t work …


… why this doesn’t happen?

… at what level this fails… machine1, switch or machine 2 ? and why ?

… Switch considers MAC address & not the IP… so it should forward the data…is it right ?

Few things…

1. I know to communicate between n/ws Router is required…. ! but in switch case… MAC is considered & not the IP..!

2. No VLAN, No Router is considered for this example ! Plain n/w


“Guys I want u to explain the concept behind this.. its not about I  wanted to make it work…I agree that it doesn’t work…but the problem  again…the same… why not ????”

Ok so we go back to basics.

2 PC’s.

PC1 =

PC2 =

PC1 wants to ping PC2

The first thing it does is compare the IP address of PC2 with its own IP/subnet mask. It realises that PC2 is on another network.

PC1 checks its routing table to see if it has a route to PC2’s subnet. Most likely it does not, but it should have a default route.

So what PC1 does is ARP for the default gateway’s IP and get the MAC of the default gateway. (If it doesn’t have a default gateway, it drops the ping.)

PC1 then encapsulates the ping (ICMP) in an Ethernet frame with destination address = MAC address of the default gateway.

So when the Ethernet frame arrives at the L2 switch, it forwards the frame to the default gateway, NOT PC2.
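The switch’s part of this story can be modeled as a toy MAC table. This is purely illustrative (names and structure are made up), but it shows why the switch delivers the frame to whatever MAC the host chose, gateway or otherwise:

```python
# Toy model of an L2 switch's MAC table (illustrative only): the switch
# learns source MACs per port and forwards purely on the destination MAC;
# it never looks at the IP header.
class Switch:
    def __init__(self):
        self.mac_table = {}                 # MAC address -> port number

    def receive(self, port, src_mac, dst_mac):
        self.mac_table[src_mac] = port      # learn which port src lives on
        out_port = self.mac_table.get(dst_mac)
        return out_port if out_port is not None else "flood all ports"

sw = Switch()
print(sw.receive(1, "aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb"))  # -> flood all ports
print(sw.receive(2, "bbbb.bbbb.bbbb", "aaaa.aaaa.aaaa"))  # -> 1
```

If the host addressed the frame to the gateway’s MAC, the switch simply looks that MAC up and forwards it there; the IP addresses inside are invisible to it.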


MAC addresses work at the data link layer, i.e. Layer 2, and IP addresses work at the network layer, i.e. Layer 3.

As per your IP addressing, & are on different subnets. Yes, I know both are on the same switch, but the subnets are different, so they won’t communicate without an L3 device, i.e. a router or L3 switch.

When two PCs are on the same subnet and one PC tries to ping the other, it sends an ARP, and after resolving the ARP it sends the ICMP packet (which works at Layer 3), and the ping succeeds because both are on one subnet.

In your case the subnets are different. The ARP is not resolved because it is sent for the gateway, but no L3 device is connected, so it won’t work.


This fails at machine 1. Machine 1 has its own routing table based upon the IP and subnet mask you have assigned it. When you try to ping a PC outside of its network (calculated from the IP and subnet mask), it will automatically send the information to the configured gateway of machine 1.

It doesn’t matter that they are connected to the same L2 device; just being connected to the same L2 device does not mean that they will be able to communicate using ARP. Machine 1 will send an ARP request for the gateway (explained above), not machine 2.

Check out the routing table on my laptop. If I want to communicate with something inside, it will send an ARP request for that destination, but if it’s outside of that it will send an ARP request for my gateway and forward the data there.

netstat -rn
Routing tables

Destination        Gateway            Flags        Refs      Use   Netif Expire
default          UGSc            2        0     en1
127                                   UCS             0        0     lo0
                                      UH              6   186490     lo0
169.254            link#6             UCS             0        0     en1
192.168.0/16       link#6             UCS             2        0     en1
                   link#6             UHLWI           2        0     en1
                                      UHS             0        0     lo0
                   ff:ff:ff:ff:ff:ff  UHLWbI          0        6     en1
Question: What are the reasons for not putting multiple subnets on the same VLAN?
I would like to know why we do not (and should not, I guess) use 2 different networks on the same LAN/VLAN. From what I tried and understood:

Hosts in network A (ex: can talk to each other.

Hosts in network B (ex: can talk to each other.

Hosts in network A cannot talk to hosts in network B, which is normal since inter-LAN communications need an L3 device with a routing function.

The idea/principle of a LAN/VLAN is, in the course I’ve followed, described as a broadcast domain. But I am confused since I can configure 2 working networks within the same LAN.

I also tried the same configuration but with a second switch and a different VLAN number (SW1 with VLAN 10 and SW2 with VLAN 20). All ports of each switch are in access mode with VLAN 10 and 20 respectively. I had the same result.

Note: each side of the topology has a host from network A and B.

Now, nobody does that, and I suppose it is for some good reasons, but I did not find what those reasons are, and that is what I am asking you.
Answer:There's really no reason not to put multiple subnets on the same VLAN, but there's also probably no reason to do it.Pro:Allows the subnets to talk directly without a router or firewallSave's VLANsCon:Allows the subnets to talk directly without a router or firewallIt's messy from a documentation and troubleshooting perspectiveMore broadcast trafficWe generally don't do it because of the messiness and lack of security. One VLAN = one subnet is easier to document and easier to troubleshoot and there's usually not a good reason to complicate things.The only reason I can think of to do it is company mergers or network upgrades and for both of those I'd prefer it to be temporary.Edit to clarify, for the hosts on different subnets but the same VLAN to talk directly you'd need to either make them their own default gateway or add a route to the "other" subnet that connects it to the interface.In the gateway case if the host IP was then the gateway would also be This will cause the host to ARP for everything on or off it's subnet. This would allow it to talk to the second subnet on that VLAN but the only way it'll be able to talk to anything else is if there's a router/firewall running proxy arp that can help it out.
In the route out the interface case you'd add something like "route add -net netmask eth0" to the device and then will ARP directly on eth0 when it wants to reach 192.56.76.*.
Your first “pro” is incorrect. If a node wants to send a packet to another node that is not on its subnet, it will send the packet to its default gateway instead (if the node has a routing table of its own, it will look in that table first). If there is no router available to the node, then it won’t be able to send the packet. What you could do is have a router on a stick without the router being VLAN capable/supporting tagging, or using multiple physical interfaces.
Nope, it's correct, I just didn't mention that you'll need a change on the hosts to either route that subnet out that interface or make the host it's own default gateway. In either case the two hosts will talk directly without going through an L3 device
I just wanted to give a real world example of why you might want 2 subnets on one VLAN/LAN:
We have some offices that want non-NAT public addresses and some that want private IP addresses (10.x). By running 2 subnets on 1 VLAN, the users can plug a switch into the office's single ethernet port and have some devices privately IP'd and some publicly IP'd. This saves the admins time and wiring costs of having to run multiple connections to each office or switch links between VLANs anytime there is a change wanted by the end user.
Peter Green gave a good summary of some other pros and cons that I agree with.

Now, nobody does that
That statement is not true. Some admins do it, some don't. There are pros and cons to such a setup.
- You can move stuff around without reconfiguring the switchports.
- If you use ICMP redirects you can arrange for the bulk of data traffic to pass directly between hosts without hitting a router.
- One machine can have IPs on multiple subnets without requiring multiple NICs on the same machine or VLAN support on the end machine (afaict the latter is no problem on Linux but more of an issue on Windows).
- You save VLAN IDs.
- More broadcast traffic.
- If there is a firewall in the L3 routing then people may think the hosts are isolated when they are not really.

Question: Networking fundamentals: Subnetting

Subnetting is the process of breaking a network into multiple logical sub-networks. An IPv4 address comprises four octets of eight bits, or thirty-two bits total. Each octet is converted to decimal and separated by a dot, for example: 11111111.11111111.11111111.00000000 =

The Subnet Mask allows the host to compute the range of the network it’s a part of, from network address to broadcast address. 

A device with an IP address of with a subnet mask of knows that the Network address is and the Broadcast Address is
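Since the actual addresses above are omitted, here is the same computation with a hypothetical host at 192.168.1.130/26, using Python’s `ipaddress` module:

```python
# The addresses in the text are omitted, so a hypothetical host at
# 192.168.1.130 with mask 255.255.255.192 (/26) is used for illustration.
import ipaddress

iface = ipaddress.ip_interface("192.168.1.130/26")
net = iface.network
print(net.network_address)    # -> 192.168.1.128
print(net.broadcast_address)  # -> 192.168.1.191
```

The /26 mask carves each /24 into blocks of 64 addresses, so .130 falls in the .128–.191 block: network .128, broadcast .191.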

Each place in the octet string represents a value:

128  64    32    16      8       4       2       1

1       1       1       1       1       1       1       1

When added together (128+64+32+16+8+4+2+1)= 255.

Network Class Ranges

Depending on what value is used, an IP represents a different class of network:


Most LAN networks use private IP addresses outlined here:


These addresses cannot be routed on the public Internet, but that is why the edge of the network will typically be using NAT (Network Address Translation) to translate the private IP addresses to public addresses. Using subnetting, one can split these private IP addresses to fit as many hosts as needed depending on the subnet mask that is used. The subnet mask divides the network portion (network bits) of the address from the host portion (host bits).

Typical Private Range Masks

Class A:



Class B:



Class C:



Cisco Meraki allows users to input subnet masks using CIDR notation, which is a shorter way of expressing a subnet mask. If the subnet mask being used in a Class C network is, the CIDR notation would be /28 because the network portion of the mask borrowed four bits from the host portion: = 11111111.11111111.11111111.11110000
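The /28 arithmetic can be checked with Python’s `ipaddress` module. The mask 255.255.255.240 is the one implied by the text; the rest is illustrative:

```python
# Checking the /28 arithmetic: 255.255.255.240 has 28 network bits.
import ipaddress

net = ipaddress.ip_network("0.0.0.0/255.255.255.240")
print(net.prefixlen)          # -> 28

# Equivalently, count the set bits in the dotted mask directly:
bits = sum(bin(int(octet)).count("1") for octet in "255.255.255.240".split("."))
print(bits)                   # -> 28
```

Both approaches agree: three full octets (24 bits) plus the four borrowed bits give the /28 prefix.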

Question: Do switches know subnetting?

Question and Answer:

1. can switches read an IP address or can they understand IP address?

They actually can, but not in the context of your question. Since you’re asking about forwarding traffic with a Layer 2 switch, the answer is no – they don’t look into IP addresses.

I connected 3 PCs with different subnet mask to switch ports and they were not able to talk to each other, but when they are in the same network, they can communicate.

That’s exactly what subnetting is for!

=> I understand that’s what subnetting is for. If those PCs were connected to a router, no doubt for me. Let me rephrase the question: does subnetting work if the above 3 PCs are connected to a switch alone?

2. when they are in the same network, the switch doesn’t care what the default gateway is; all 3 can talk to each other. is this normal working?

Yes, this is normal. A default gateway is used only when you want to move traffic out of the subnet.

if switches operate at layer 2, how can they do this?

Layer 2 switches look at MAC addresses. Let’s assume that PC1 has a MAC address of 1111.1111.1111, and that of PC2 is 2222.2222.2222

When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet. If it is, PC1 sends a broadcast ARP request asking “who is”. All hosts within the broadcast domain receive this query, process it and discard it – all but PC2 that sees that someone is asking for its IP address. So PC2 sends an ARP reply saying “I’m the host in question, my MAC address is 2222.2222.2222”. Now PC1 can build a frame sending it from MAC address 1111.1111.1111 to 2222.2222.2222. The switch receives the frame, looks up the destination MAC address in its MAC table, and forwards the frame out the appropriate port. This is how the frame reaches PC2. Note that the switch did not look at the IP addresses!

When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.

how does PC1 check if PC2 is within its own subnet? my doubt basically lies around this

All hosts within the broadcast domain receive this query

=> I understand switches don’t divide broadcast domains (except VLANs), and no VLANs are configured here. Just 3 PCs are assigned IP addresses and gateways as below and connected to switch ports. There is no configuration done on the switch.

ip addr | subnet mask | default gateway




say if PC1 pings PC2, then here, do all PCs receive the query, considering it is one broadcast domain?

if the answer is NO, then based on what do the PC/switch know about their domain?

if the answer is YES, then please look at my previous post’s question

Question and Answer:

Okay, now I got the point of your confusion.

I understand that’s what subnetting is for. If those PCs were connected to a router, no doubt for me. Let me rephrase the question: does subnetting work if the above 3 PCs are connected to a switch alone?

Yes it does. This is not about switches or routers, it’s really about the question: How does a PC decide to send a frame out of its NIC to whatever is connected?

The answer lies in building the frame. A PC (much like a router) will do a few recursive route lookups which should come to an outgoing interface in the end! If it doesn’t, the frame won’t leave the PC’s NIC.
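That route lookup can be sketched with a hypothetical two-entry host routing table (an on-link /24 plus a default route; all addresses are illustrative). The longest matching prefix wins:

```python
# Minimal sketch of a host's route lookup with a hypothetical two-entry
# routing table: an on-link /24 and a default route via 192.168.1.1.
import ipaddress

routes = {
    ipaddress.ip_network("192.168.1.0/24"): None,        # None = on-link (send direct)
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.1",    # default route via gateway
}

def lookup(dst_ip):
    """Return the next-hop IP the host will ARP for when building the frame."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routes[best] or dst_ip                        # on-link: ARP for dst itself

print(lookup("192.168.1.7"))  # -> 192.168.1.7 (sent directly)
print(lookup("8.8.8.8"))      # -> 192.168.1.1 (sent via the gateway)
```

Remove the default route and the lookup for 8.8.8.8 finds no usable next hop, which is exactly the “frame cannot be built” case described below.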

When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.

how does PC1 check if PC2 is within its own subnet? my doubt basically lies around this

This is about binary math. Let’s take your example with PC1 ( and PC2 ( So PC1 wants to ping PC2. PC1 needs to decide if this is local communication or not, i.e. if PC2 is within PC1’s subnet or not. Let’s look at the last octet.

6  = 00000110

10 = 00001010

The leading bits (those covered by the subnet mask) belong to the subnet, whereas the remaining bits belong to the host. As we can see, the subnet part is different for and, which means that in order to access PC2, PC1 needs to go to its default gateway, which in your case is. Now let’s imagine that there is no default gateway configured on PC1. In this case the frame cannot be built because PC2 is in another subnet. No frame – no communication, i.e. data won’t go out the PC1’s NIC. Note that it doesn’t matter what is connected to PC1 – a router, a switch, or a directly connected PC2 – the frame won’t go there, as the packet cannot be encapsulated.
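The same binary math can be reproduced with bitwise operators. Since the actual mask is omitted in the text, a /29 last octet (11111000) is assumed here purely for illustration:

```python
# Reproducing the binary math with bitwise operators. The actual mask is
# omitted in the text, so a /29 last octet (11111000) is assumed here.
mask = 0b11111000
pc1 = 6    # 00000110
pc2 = 10   # 00001010

print(pc1 & mask)  # -> 0 (subnet bits 00000)
print(pc2 & mask)  # -> 8 (subnet bits 00001)
# The masked subnet bits differ, so PC1 treats PC2 as being on another
# subnet and must hand the packet to its default gateway.
```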

say if PC1 pings PC2, then here,do all PCs receive the query considering as one broadcast domain?

The ARP broadcast asking about won’t be sent, because PC2 is not within PC1’s subnet, so no one will receive it. In order to send a packet to PC2, PC1 will have to build a frame with the DG’s MAC address as the destination. This means that PC1 will send an ARP request broadcast for, and not. This broadcast will be received by everyone (PC2, PC3), as everyone must react to frames sent to FF:FF:FF:FF:FF:FF (the broadcast MAC address).

if the answer is NO, then based on what do the PC/switch know about their domain?

Based on the subnet mask and binary math, as explained above.

Question and Answer:

When PC1 wants to talk to PC2, it first checks if PC2 is within its own (PC1’s) subnet.

how does PC1 check if PC2 is within its own subnet? my doubt basically lies around this

This is about binary math. Let’s take your example with PC1 ( and PC2 ( So PC1 wants to ping PC2. PC1 needs to decide if this is local communication or not, i.e. if PC2 is within PC1’s subnet or not. Let’s look at the last octet.

6  = 00000110

10 = 00001010

The leading bits (those covered by the subnet mask) belong to the subnet, whereas the remaining bits belong to the host. As we can see, the subnet part is different for and, which means that in order to access PC2, PC1 needs to go to its default gateway, which in your case is. Now let’s imagine that there is no default gateway configured on PC1. In this case the frame cannot be built because PC2 is in another subnet. No frame – no communication, i.e. data won’t go out the PC1’s NIC. Note that it doesn’t matter what is connected to PC1 – a router, a switch, or a directly connected PC2 – the frame won’t go there, as the packet cannot be encapsulated.

say if both PCs are in the same subnet; here also we use the same IP addresses. If PC1 compares the IP addresses like above, it will still show them as different, but here they are in the same subnet

PC1 (

PC2 (

Do PCs look at the IP address for determining the subnet? This is the first time I have come across this. Please clarify this.

Question and Answer:

2 PCs in different subnet connected to router

PC1 pings PC2:

PC1->default gateway->router->PC2

Default gateway is a router, so if both PCs are connected to the same router at Layer 3 it looks like PC1->router->PC2.

2 PCs in different subnet connected to switch

PC1 pings PC2:

PC1->default gateway-> what happens next…?

Next, the packet is routed according to the routing table of the router (which is the default gateway).

Question: Actual difference between VLAN and subnet

Question and answer:

A subnet is a layer 3 term. Layer 3 is the IP layer where IP addresses live.

A VLAN is a layer 2 term, usually referring to a broadcast domain. Layer 2 is where MAC addresses live.


On a cheap normal switch, there is just one single broadcast domain – the LAN – containing all the physical ports.

On a more expensive switch, you can configure each physical port to belong to one or more virtual LANs (VLANs). Each VLAN has its own broadcast space, and only other ports on the switch assigned to the same VLAN as yours get to see your broadcasts.

Most commonly, broadcast traffic is used for ARP so that hosts can resolve IP addresses to physical hardware (MAC) addresses.

On the cheap normal switch, it’s totally possible to have two subnets (say, and living happily in the same broadcast domain (VLAN), but each will simply ignore the other’s layer 2 broadcast traffic because the other hosts are outside the expected layer 3 subnet. This means that anyone with a network sniffer like Ethereal can sniff broadcast packets and discover the existence of the other subnet within the broadcast domain. If two VLANs were used instead, then nobody with a sniffer could see broadcasts from VLANs that their port isn’t a member of.


A VLAN is a logical local area network that contains broadcasts within itself; only hosts that belong to that VLAN will see those broadcasts. A subnet is nothing more than a range of IP addresses that helps hosts communicate over Layers 2 and 3.

Although you can have more than one subnet or address range per VLAN (usually called a superscope), it is recommended that VLANs and subnets are 1 to 1: one subnet per VLAN.


Hi Nick, I’ve been having the same trouble after hearing about VLANs recently. I am a beginner and I’ll try to explain what I’ve understood.

In a LAN perspective, both VLANs and subnets do the same job, i.e. break a network into smaller networks, thereby increasing the number of broadcast domains. What makes a VLAN and a subnet different is the way in which they do so. In VLANs, the network to which a host belongs is decided by the interface to which it is connected (layer 2). In subnets, it is decided by the IP address assigned to the host (layer 3). It’s up to you to decide what you want to use.

Subnets play a vital role in the WAN perspective. Say 3 clients need 60 IP addresses each. Before subnetting was introduced, the service providers had to give 3 class C addresses, one for each client. But with subnetting, the service provider can use 1 class C address to provide IP addresses for all 3 clients (reduced wastage of IP addresses). VLANs have no role here.

NOTE: The sole purpose of subnetting is to reduce the wastage of IP addresses. Increasing the number of broadcast domains is an added advantage. Whereas the sole purpose of a VLAN is just to increase the number of broadcast domains.


The common practice is to assign a subnet per VLAN, so each VLAN will have a unique subnet.

Question: how to connect 2 different network with different subnet mask


i have 2 different ISPs, viz. Airtel and BSNL, and 4 different VLANs: 1) vlan1: - 2) vlan2: - 3) and vlan4. i want to have an intranet within this network


You need a commercial router, or maybe a consumer one with third-party firmware like dd-wrt. You could also use a layer 3 switch if you really have VLANs. The larger issue is going to be connecting to 2 ISPs and how you plan to share them. That is more of a load balancer function.

Question: Multiple subnets on one VLAN??

I have a question about a network design I did for a project involving VLANs. The network had to have two LAN segments, one for students and one for administrators. To do this I decided I would have two VLANs (one for the students and one for the admins).

There were several classrooms (approximately 50); each room was to have 24 student computers and 1 admin computer. I decided each classroom would be on its own subnet, with the computers in each room connected to a switch and then linked to the rest of the network by connecting to a central layer 3 switch. The links from the switches in the classrooms to the layer 3 switch for students would be on VLAN 1 and the links for the admins would be on VLAN 2.

I thought this was a good design, but when I presented it to my teacher, they said that it is not possible to have multiple subnets on one VLAN.

I thought that VLANs were more of a port assignment thing, so it wouldn’t matter about the subnet information?

Can someone please help me here. Was I wrong? 

Can you only have one subnet per VLAN?


you can have multiple subnets on a VLAN, but it’s not a great idea and here’s why… routing. at some point each subnet will need to route to each other subnet and you’ll have to have multiple IPs on each interface to do it, one for each subnet on that VLAN.

imagine adding 50 ips to each VLAN interface on the router, one for each classroom. that’s gonna be annoying at the least to administer and downright dangerous when making changes to network config.

and at the end of the day, separate VLANs are there for security and splitting broadcast traffic up. which of these is a VLAN for each room solving? switching already provides unicast security between computers on a VLAN. why not have a VLAN for each computer while you’re at it? (just taking it to an absurd extreme)

apologies if that isn’t clear. in a hurry. ask more questions if you feel like it….


You could consider using a supernet (ie one larger subnet)



Can you only have one subnet per VLAN? 

Yes, it is possible to have multiple subnets on a VLAN, just as you can have one subnet spread across multiple VLANs; the principle is the same. You are just creating broadcast domains.

Personally I don’t see any issue with what you have presented.

If you have Admin PCs on VLAN1 –

and Student PCs on VLAN2 –

You have presented two different network subnets, though, on two different VLANs? So where did he get the multiple subnets under the same VLAN from?


At King of Nowhere

thank you for your speedy reply

I have been trying to digest what you said, so I didn’t reply instantly

King of Nowhere writes…

at some point each subnet will need to route to each other subnet 

so it would be better to just create one large subnet for all the students?

seperate VLANs are there for security and splitting broadcast traffic up. 

I had the two VLANs so that I could create two network segments as required, using the same network devices (switches etc.)

I thought that if I just had two separate subnets (one for students and one for admin) I would need separate switches for the admin and student computers?

this seems like an expensive way of doing things. is this the usual practice in setting up such networks? or am i just confused about how the subnets connect?

which of these is a VLAN for each room solving?

did you mean subnet for each room?

At LoM:

thx for the suggestion

My original subnet I was given to use was /13, so there would be no problem with using more bits for host addresses; I was just trying to create separate subnets for each classroom

At Pseudo:

thx for the assistance

I had multiple subnets for the student VLAN as every classroom was its own subnet


so it would be better to just create one large subnet for all the students? 

yup, as far as i’m concerned. you’re looking at 50×24=1200 computers though, so some segmentation would be advisable to cut down on broadcast traffic swamping the network. mind you, 1200 computers are unlikely to all be in one building/floor so there would be a suitable division.

TCP/IPBandit writes…

I thought that if I just had two seperate subnets (one for students and one for admin) I would need seperate switches for the admin and student computers? 

not necessarily. but you still need a router if the two talk to each other at some point. you could have servers with an IP address on each subnet and then keep the subnets entirely separate (no router). remember switches are layer 2 and couldn’t care less what traffic they carry.

TCP/IPBandit writes…

did you mean subnet for each room? 

yup, oops.

one thing a VLAN per subnet does better than carrying two subnets on the same VLAN (broadcast domain) is DHCP. because DHCP servers are located using broadcasts, it is trivial to hand out the correct subnet to the workstation based on its VLAN. carrying multiple subnets, I can’t think of any other way but to reserve addresses for each MAC address. and we’re trying to reduce administration headaches after all…


I once worked at a large US company and looked after their Aussie network. When I joined, I inherited a network design where the IP addressing scheme/subnet mask was already assigned to each country and I couldn’t change it. If I changed the subnet mask I would encroach on another country’s IP addresses.

The only way to use the allocated IP addresses was to have multiple VLANs which had multiple subnets attached to them.

Initially when I started they had a Cisco 3600 series router which was doing ‘router on a stick’. There was all sorts of network congestion and complaints from people about network performance.

After a while the Cisco router was relieved of its ‘router on a stick’ function and replaced by a Cisco 4000 series switch, which is a layer 3 switch. Once this was done all the network issues disappeared.

So to answer your question:

You can have multiple subnets per VLAN. You can have multiple VLANs per switch. You need a router to route between VLANs: either an external router if using a layer 2 switch, or the inbuilt routing if you use a layer 3 switch.

In your design, having the teachers on a separate subnet to the students is a good idea as well.

Your design looks good. I think your teacher is wrong. Your teacher may be a bit behind in the technology.


I have a nice little access point sitting beside me, a G-3000H by ZyXEL. I would put one in or near each classroom. This AP will do layer 2 separation, multiple SSIDs, and VLANs, with an inbuilt RADIUS server (or external); it handles 32 computers, passwords etc. per AP, or can use external servers. It may not be what is wanted, but have a look at the user handbook on the ZyXEL site. This would need a lot fewer cables and hubs, separate the students and classrooms from each other and from the administrators, yet let the administrators get to the students if needed.


Am I missing something here? Why would you want to have multiple subnets on a single VLAN??

Why would you not create separate vlans for the classrooms if that is what you want?


mind you 1200 computers are unlikely to all be in one building/floor so there would be a suitable division. 

suitable division? as in the switches in the rooms?

remember switches are layer 2 and couldn’t care less what traffic they carry. 

aha, brilliant, thx. keep forgetting that. I think that’s where i keep getting confused, thinking each port needs its own IP address to connect to the different subnets.

I can’t think of any other way but to reserve addresses for each MAC address. 

I don’t quite understand what you are saying in this paragraph. were you saying that DHCP wouldn’t work because of all the subnetting? or that the VLAN would allow DHCP to work on the multiple subnets?

scoobydoosti writes…

Your design looks good. 

Is that my original design you’re talking about, or the slightly modified one of having only two subnets (one for admins and one for students)?

I think your teacher is wrong. Your teacher may be a bit behind in the technology. 

I spoke to my teacher again and they were saying that the trunk link wouldn’t work if there were multiple subnets on the same VLAN??

paulvk writes…

may not be what is wanted but have a look 

Thanks for the suggestion, tis good to know what options are out there


You can have multiple subnets on a single VLAN, but you will need to use a lot of secondary addressing to get them to route between each other. It is generally considered bad practice, and not something you would use in the real world.

To get the admin workstations out of the way, I would put them all on a single subnet and VLAN.

The student workstations, I would group them in a logical order either by level or building (one VLAN/subnet per level or building if its single story).

You have to be careful when using one giant subnet to cover the whole lot. If someone creates a network loop you will wipe out the entire student network. Using VLANs and different subnets, you will confine this effect to the VLAN the loop has been created in.

Once again, giant subnets are not something you would roll out these days.


Thanks for ur help Nik G

Nik G writes…

but you will need to use a lot of secondary addressing 

are you talking about addressing on the routers/layer 3 switches to link the subnets?



If anyone can help me out with the questions in my last reply above that would be appreciated.


At LoM:
thx for the suggestion
My original subnet I was given to use was a /13, so there would be no problem with using more bits for host addresses; I was just trying to create separate subnets for each classroom 

Have a look at Super VLANs


I think it's already been mentioned, but generally you will not put multiple subnets into a single VLAN (although it can be done). With a few exceptions it's a pretty bad design when you're not limited in a fashion that forces you to do so.

In your case, assuming each classroom has its own subnet, you will bring a "classroom" VLAN in as well as the admin VLAN.


All rooms get VLAN1 which is your Admin VLAN.

Classroom2 gets VLAN2 with subnet x.x.x.x

Classroom3 gets VLAN3 with subnet y.y.y.y

Classroom4 gets VLAN4 with subnet z.z.z.z

Now, without getting too complex by having multiple router modules (Layer 3 switches are so fast nowadays they can route just about as efficiently as they switch), you pull all the VLANs into your one Layer 3 switch, which can (if you want) route between the VLANs or control access however you want. 

From a trunking point of view: let's say Classrooms 2 and 3 are next to each other and share the same switch.

You would trunk VLANs 1, 2 and 3 to this switch. Let's say Classrooms 4 and 5 were in another building with another switch. You'd trunk VLANs 1, 4 and 5 to that switch. You'd then break out whatever VLANs you need to break out; it might require some additional switches after that…

So your professor is wrong in saying it's not possible; it most definitely is… but it's probably not a good idea since you're not limited… Maybe that's what he's trying to convey to you…


Seeing as the network structure being asked for is a standard practice across all Victoria state schools (network separation between administration and curriculum segments), chances are that the teacher is working from a case study.

TCP/IPBandit writes…

Was I wrong? 

Regardless of whether the multi-subnet solution is possible, it is still overkill.

If a contractor came to me with such a solution, they'd better have some valid reasons for suggesting such a design.

Having investigated a similar multi-subnet solution for a school that I work for, I can honestly say that there are very few problems solved by the solution and a great deal of additional overhead.

Why do you want to implement such a solution? “Because I thought it would be cool” isn’t a good enough answer 😉


I thought this was a good design, but when I presented it to my teacher, they said that it is not possible to have multiple subnets on one VLAN. 

…an absolute crock of, well, you know what. VLANs are layer 2 (well, layer 2.5), but if you can run it over ethernet, you can run it over a VLAN. Hell, you can run IP/IPX/ARTNET side by side on the same VLAN if you want – it’s totally protocol independent.

That being said, supernetting all the machines so they’re on the one subnet wouldn’t be a bad idea, so long as you ensure you have broadcast trapping enabled on your switches (most switches will do this just dandily).

If you do decide to put everything on a different subnet (which does have its advantages), you'd be better off using layer 3 Cisco/Foundry equipment (all of which is fantastic equipment), and if you do need to do router on a stick, VLAN trunk to something that's going to give some decent throughput (someone may crucify me here, but for static routes, it's probably worth using MikroTik RouterOS; its price/performance ratio is fantastic for doing this kind of thing).

It does raise the question though – in a school scenario, is there any need to route between subnets? Most students will only require access to the internet, file shares and any other network-delivered applications. This would mean they wouldn’t require inter-room routing, and would only need to see whatever subnet the internet gateway/fileservers etc. were sitting on. This also goes for a network that is using only Citrix.


Thx for the replies LoM, Polymer714, noonereallycares and Curtis Bayne; my knowledge is slowly expanding :).

From what I gather, you all seem to be saying that you can have multiple subnets on one VLAN, but that having each classroom on its own subnet would just make administration much harder without providing much benefit over a single large subnet for all the students.

I think I just thought that putting all the classrooms on their own subnets would help to minimise traffic on the network. I guess this isn't needed though?

I’m still a little confused as to what my teacher was thinking when they said having multiple subnets on one VLAN wouldn’t work because of the trunk link? Could someone fill me in as to what they may have been thinking? 

I thought the trunk link was just a way of using one connection to transfer data for multiple VLANs?


I’m still a little confused as to what my teacher was thinking when they said having multiple subnets on one VLAN wouldn’t work because of the trunk link? Could someone fill me in as to what they may have been thinking? 

Your teacher is confused. The statement isn’t true.


The benefit of separating rooms by VLAN (especially in a school) would be to stop the spread of a broadcast virus.

When your teacher said it can’t be done he may have been thinking that you can’t route between the subnets, but you can with secondary IP addresses. But, AFAIK, this will break DHCP as the L3 interface won’t know which address to use as the giaddr for each specific client. i.e. when the client broadcasts for DHCP, it will hit the L3 interface but then it doesn’t know which address to use for the relay, the primary or the secondary. I am unsure on this though.
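For what it's worth, the relay setup under discussion is normally just an ip helper-address on the L3 interface, and the giaddr is taken from that interface's address. A sketch with invented addresses:

```
interface Vlan10
 ip address 192.168.1.1 255.255.255.0
 ! relay client DHCP broadcasts to the server as unicast;
 ! the interface (primary) address becomes the giaddr
 ip helper-address 10.0.0.5
```

With a secondary address on the same interface, the server has to infer which scope to serve from that single giaddr, which is the ambiguity described above.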

The best option would just be to have a separate VLAN for each room – if you are using a L3 switch. If it’s router-on-a-stick then you may saturate the uplink if there is a lot of inter-room communication. In this case analyse which room communicate with each other most often and group them in the same VLAN – but try not to put too many rooms together, to control broadcasts.
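For reference, router-on-a-stick as mentioned here means one physical uplink carrying all the VLANs, with an 802.1Q subinterface per VLAN on the router. A sketch (the interface name, VLAN IDs and subnets are placeholders):

```
! single trunked uplink from the switch to the router,
! one subinterface per room VLAN
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
!
interface FastEthernet0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0
```

All inter-VLAN traffic goes up and back down that one physical link, which is why the uplink can saturate.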

Are you limited to 2 VLANs?


Your teacher is confused. The statement isn’t true. 

Thx again LoM, but could you possibly expand on that?

Notam writes…

Are you limited to 2 VLANs? 

Thx for the reply. I didn't have to use VLANs at all; I just chose to, as the design requirement was to have 2 LAN segments (one for admins and one for students)

can’t route between the subnets, but you can with secondary IP addresses. 

I haven't really heard of secondary IP addresses before, but wouldn't the layer 3 switch be able to handle routing between subnets? Why would you need secondary IP addresses?


It could handle the routing but here’s the problem..and maybe it’s everyone’s misunderstanding.

Sounds like you have two vlans..

Vlan 1 – admin

Vlan 2 – students

Vlan 2 stretches across ALL the classrooms as does VLAN 1.

Vlan 2 has different subnets for each classroom, so let's make it easy: a separate /24 for each of three different classrooms…

Now you have your VLAN on the Layer 3 switch. How does it know about all three subnets? Let's change it up and add two more /24s, for five classrooms across a single VLAN. So how can your layer 3 switch route them? Well, you can configure secondary addresses on the VLAN interface, which is what has been suggested…


Each subnet has its own vlan, each is configured on the layer 3 switch, and you can route between them.


Well, say you had two different subnets in the same VLAN. Which address will you use for the L3 interface on the switch?

If you use the first one, how is the RP going to know how to route to the second one? So, you configure 2 IPs on the interface, one for each of the subnets.

But don’t get caught up in this. You shouldn’t design like this. It’s usually just something that’s done if you run out of addresses for a particular subnet.



Thx for the replies

Polymer714 writes…

So how can your layer 3 switch route them? Well, you can configure secondary interfaces on the VLAN

Notam writes…

you configure 2 IPs on the interface, one for each of the subnets. 

I think this is where I am getting confused: I'm thinking that the VLAN is a port assignment, so if I had each class connected (with two connections, one for the admin and one for the students) to a different port on the layer 3 switch, then the layer 3 switch could route between the subnets?

I guess I'm getting confused between layer 2 and layer 3?


I think this is where I am getting confused, I’m thinking that the VLAN is a port assignment 

Usually, VLANs are port assignments. If each classroom has a switch (an "edge" device), then you'd assign the ports on that switch to the appropriate VLAN. For example, if the student VLAN is "10" and student PCs are plugged into ports 1-20 on the switch, you'd assign those ports to VLAN 10. All these devices are now in the same broadcast domain. You can assign them whatever IP addresses you like, and so long as they are in the same subnet, the PCs will talk to each other. This is layer 2 functionality, because the PCs are really just using ARP (Address Resolution Protocol) to match IP addresses to MAC addresses, without the switch doing much more than forwarding "Who has X IP?" broadcasts. 
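In Cisco IOS terms, that port assignment on the edge switch would be sketched something like this (the port range and VLAN ID are just the example values from the paragraph; interface names are assumed):

```
! make ports 1-20 access ports in the student VLAN
interface range FastEthernet0/1 - 20
 switchport mode access
 switchport access vlan 10
```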

If you have a teacher VLAN, “20”, in the same room you assign the ports on the switch to VLAN 20 where teacher PCs are plugged in. PCs in VLAN 20 can’t talk to PCs in VLAN 10 without routing, as the “Who has X IP?” broadcasts are not passed across the VLAN boundary. To do this you need to route.

Now, at the core of your network, you’d be doing the routing. You get to the core from an edge device using a trunk port. A trunk port carries multiple VLANs across a single ethernet connection. To route, you also need to assign the VLAN interfaces an IP address (or addresses, you can happily have multiple addresses on one VLAN). This address becomes the default gateway for the PCs on that VLAN, and is learned by the core router’s routing table.
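A rough sketch of that core-side configuration on a layer 3 switch (interface names and addresses are assumptions, not from the thread):

```
! trunk to the edge switch, carrying all VLANs 802.1Q-tagged
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! routed VLAN interfaces; these addresses become the PCs' default gateways
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
interface Vlan20
 ip address 192.168.20.1 255.255.255.0
```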

The routing table holds all the information the core knows about routes to various IP addresses. It knows how to get from VLAN 10 to 20, because both VLANs are directly attached via the trunk port. The core also maintains a MAC address lookup table, so it is able to say “MAC address Y exists on VLAN X, which is directly attached to Ethernet port Z, therefore I can send traffic to it”.

Yes, this does mean that to get from a teacher PC to a student PC in the same lab, you have to send the traffic up and down the same physical interface.

I hope that’s moderately clear. 🙂


I hope that’s moderately clear. 🙂 

Yes thanks Curtis that helped to clear some things up for me



I think I finally understand the problem:

Each class subnet would have a different network ID, but each VLAN is only assigned one network ID for routing (without secondary addresses). So when it comes to routing, the layer 3 device would only know the one IP address of the VLAN and so would not be able to route to the different subnets unless the VLAN has multiple secondary IP addresses?

Am I on the right track here?


Nearly. 🙂

The layer 3 switch will be able to route between all VLANs, so long as the VLANs have IP addresses defined. This does include any secondary IP addresses. For example, if VLAN 10 is defined in the layer 3 switch as such (addresses here are just examples):

int VLAN 10

ip address 192.168.10.1 255.255.255.0

ip address 192.168.11.1 255.255.255.0 secondary

And VLAN 20 is defined as such:

int VLAN 20

ip address 192.168.20.1 255.255.255.0

Then the layer 3 switch holds in its routing tables all the paths necessary to reach all three networks defined. The PCs in VLAN 10 can have IP addresses in either range and still be routed (assuming the default gateway on each PC is set for the correct subnet).


Thx again Curtis,

I think that is what I was trying to get at.

So for my design to work I would need to assign secondary IP addresses to my Student VLAN for every classroom subnet?


if you have lots more students than admin staff, I'd probably make a VLSM structure with lots of VLANs, something like this:

admin vlan: 50 IPs required, so give it a /26 mask (62 usable addresses)

then make a vlan per classroom for the students to cut back on broadcast domain size: a /27 mask (30 usable addresses) for each classroom subnet, and so on.

then you can have DHCP running with ip helper-address configured on the subinterfaces on your router. The trunk links between switches will forward requests that go between classrooms and all admin traffic (via the router of course). You can implement access lists on the router to control the students' access to the admin LAN, 802.1x can be used to authenticate the admin 'puters in case some smarty-pants student decides to plug into the admin port etc., and QoS can be applied to give the admin LAN more WAN bandwidth (for pr0n and torrents of course). That's the way I would do it; others might have better ideas, but I think this might be what your teachers are looking for in terms of your project. Good luck.
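The access-list idea might be sketched like this (the subnets, ACL name and VLAN number below are all hypothetical, chosen only for illustration):

```
! stop the student subnet reaching the admin /26, allow everything else
ip access-list extended STUDENTS-IN
 deny   ip 10.0.1.0 0.0.0.255 10.0.0.0 0.0.0.63
 permit ip any any
!
! apply it inbound on the student VLAN's routed interface
interface Vlan20
 ip access-group STUDENTS-IN in
```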


Thx for the reply Krisso I shall keep your ideas in mind for my next network design


Yeah, you could go with a secondary IP address per classroom in the same VLAN, but I wouldn't recommend it. For one, the security you're attempting to gain would be wiped out by one smart student picking an IP and gateway in another classroom.

Krisso’s got the right idea. You want multiple VLANs – one per classroom – each with a small slice of a larger address plan. 

If you're not using an allocated IP range (ie, your ISP has provided you with a block of IPs for your use only) and are doing NAT for general internet connectivity, it could be argued that simply using 192.168.1.0/24, 192.168.2.0/24, 192.168.3.0/24, etc. would be easier to remember and administer. 802.1x is a pretty advanced concept, and I suspect a little out of the scope of what you're trying to achieve. 🙂


Thx for your help Curtis,

I now know there are better ways of designing such a network,

It was more that I had already handed in my design to be marked (with the multiple subnets on the same VLAN) and my teacher was saying it couldn’t be done, so I wanted to find the answer to how it could be done so I wouldn’t be marked down.

Question: Single VLAN can support multiple subnets

I was reading a Cisco book where it says a single VLAN can support multiple subnets, because switch ports are configured with a VLAN number only and not a network address, so any station connected to a port can present any subnet address range.

If someone can please explain this to me with an example.
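I can't speak for the book's exact example, but a common illustration goes like this: the switch ports only know the VLAN number, so two hosts with addresses in different subnets can share one VLAN, and the routed interface needs a secondary address to route between them (all values below are invented):

```
! both hosts' ports are access ports in the same VLAN 5
interface Vlan5
 ip address 172.16.1.1 255.255.255.0
 ip address 172.16.2.1 255.255.255.0 secondary
! Host A: 172.16.1.10/24, gateway 172.16.1.1
! Host B: 172.16.2.10/24, gateway 172.16.2.1
```

The switch forwards frames for both hosts purely at layer 2; only routing between the two subnets needs the primary/secondary addresses.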

Question and Answer: What is a VLAN?

VLANs (Virtual LANs) are logical groupings of devices in the same broadcast domain. VLANs are usually configured on switches by placing some interfaces into one broadcast domain and some interfaces into another. VLANs can be spread across multiple switches.

A VLAN acts like a physical LAN, but it allows hosts to be grouped together in the same broadcast domain even if they are not connected to the same switch.

The following topology shows a network with all hosts inside the same VLAN:

(Figure: topology without VLANs)

Without VLANs, a broadcast sent from host A would reach all devices on the network. By placing interfaces Fa0/0 and Fa0/1 on both switches in a separate VLAN, a broadcast from host A would reach only host B, since each VLAN is a separate broadcast domain and only host B is inside the same VLAN as host A. This is shown in the picture below:

(Figure: topology with VLANs)

Creating VLANs offers many advantages. Broadcast traffic will be received and processed only by devices inside the same VLAN. Users can be grouped by department, rather than by physical location. VLANs also provide some security benefits, since sensitive traffic can be isolated in a separate VLAN.

NOTE – to reach hosts in another VLAN, a router is needed.

Access & trunk ports

Each port on a switch can be configured as either an access or a trunk port. An access port is a port that can be assigned to a single VLAN. This type of interface is configured on switch ports that are connected to devices with a normal network card, for example a host on a network. A trunk interface is an interface that is connected to another switch. This type of interface can carry traffic of multiple VLANs.
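As a sketch in Cisco IOS syntax (interface names and VLAN IDs are arbitrary examples), the two port types are configured along these lines:

```
! access port: carries a single VLAN, frames untagged toward the host
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
!
! trunk port: carries several VLANs, 802.1Q-tagged, toward another switch
interface GigabitEthernet0/24
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```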

Question and answer: Virtual Local Area Networks

Suba Varadarajan,

This paper describes virtual local area networks (VLAN's), their uses and how they work in accordance with the 802.1Q standard.



1.0 Introduction

A Local Area Network (LAN) was originally defined as a network of computers located within the same area. Today, Local Area Networks are defined as a single broadcast domain. This means that if a user broadcasts information on his/her LAN, the broadcast will be received by every other user on the LAN. Broadcasts are prevented from leaving a LAN by using a router. The disadvantage of this method is that routers usually take more time to process incoming data compared to a bridge or a switch. More importantly, the formation of broadcast domains depends on the physical connection of the devices in the network. Virtual Local Area Networks (VLAN's) were developed as an alternative solution to using routers to contain broadcast traffic.

In Section 2, we define VLAN’s and examine the difference between a LAN and a VLAN. This is followed by a discussion on the advantages VLAN’s introduce to a network in Section 3. Finally, we explain how VLAN’s work based on the current draft standards in Section 4.


2.0 What are VLAN’s?

In a traditional LAN, workstations are connected to each other by means of a hub or a repeater. These devices propagate any incoming data throughout the network. However, if two people attempt to send information at the same time, a collision will occur and all the transmitted data will be lost. Once the collision has occurred, it will continue to be propagated throughout the network by hubs and repeaters. The original information will therefore need to be resent after waiting for the collision to be resolved, thereby incurring a significant wastage of time and resources. To prevent collisions from traveling through all the workstations in the network, a bridge or a switch can be used. These devices will not forward collisions, but will allow broadcasts (to every user in the network) and multicasts (to a pre-specified group of users) to pass through. A router may be used to prevent broadcasts and multicasts from traveling through the network.

The workstations, hubs, and repeaters together form a LAN segment. A LAN segment is also known as a collision domain since collisions remain within the segment. The area within which broadcasts and multicasts are confined is called a broadcast domain or LAN. Thus a LAN can consist of one or more LAN segments. Defining broadcast and collision domains in a LAN depends on how the workstations, hubs, switches, and routers are physically connected together. This means that everyone on a LAN must be located in the same area (see Figure 1).


Figure 1: Physical view of a LAN.

VLAN's allow a network manager to logically segment a LAN into different broadcast domains (see Figure 2). Since this is a logical segmentation and not a physical one, workstations do not have to be physically located together. Users on different floors of the same building, or even in different buildings, can now belong to the same LAN.


Physical View


Logical View

Figure 2: Physical and logical view of a VLAN.

VLAN's also allow broadcast domains to be defined without using routers. Bridging software is used instead to define which workstations are to be included in the broadcast domain. Routers would only have to be used to communicate between two VLAN's [Hein et al].


3.0 Why use VLAN’s?

VLAN's offer a number of advantages over traditional LAN's, including the formation of virtual workgroups, better security, improved performance, simplified administration, and reduced costs.


4.0 How VLAN’s work

When a LAN bridge receives data from a workstation, it tags the data with a VLAN identifier indicating the VLAN from which the data came. This is called explicit tagging. It is also possible to determine to which VLAN the data received belongs using implicit tagging. In implicit tagging the data is not tagged, but the VLAN from which the data came is determined based on other information like the port on which the data arrived. Tagging can be based on the port from which it came, the source Media Access Control (MAC) field, the source network address, or some other field or combination of fields. VLAN’s are classified based on the method used. To be able to do the tagging of data using any of the methods, the bridge would have to keep an updated database containing a mapping between VLAN’s and whichever field is used for tagging. For example, if tagging is by port, the database should indicate which ports belong to which VLAN. This database is called a filtering database. Bridges would have to be able to maintain this database and also to make sure that all the bridges on the LAN have the same information in each of their databases. The bridge determines where the data is to go next based on normal LAN operations. Once the bridge determines where the data is to go, it now needs to determine whether the VLAN identifier should be added to the data and sent. If the data is to go to a device that knows about VLAN implementation (VLAN-aware), the VLAN identifier is added to the data. If it is to go to a device that has no knowledge of VLAN implementation (VLAN-unaware), the bridge sends the data without the VLAN identifier.

In order to understand how VLAN’s work, we need to look at the types of VLAN’s, the types of connections between devices on VLAN’s, the filtering database which is used to send traffic to the correct VLAN, and tagging, a process used to identify the VLAN originating the data.

VLAN Standard: IEEE 802.1Q Draft Standard

There has been a recent move towards building a set of standards for VLAN products. The Institute of Electrical and Electronic Engineers (IEEE) is currently working on a draft standard 802.1Q for VLAN’s. Up to this point, products have been proprietary, implying that anyone wanting to install VLAN’s would have to purchase all products from the same vendor. Once the standards have been written and vendors create products based on these standards, users will no longer be confined to purchasing products from a single vendor. The major vendors have supported these standards and are planning on releasing products based on them. It is anticipated that these standards will be ratified later this year.


4.1 Types of VLAN’s

VLAN membership can be classified by port, MAC address, and protocol type.


Figure 3: Assignment of ports to different VLAN's.


Figure 4: Assignment of MAC addresses to different VLAN's.


Figure 5: Assignment of protocols to different VLAN's.


Figure 6: Assignment of IP subnet addresses to different VLAN's.

The 802.1Q draft standard defines Layer 1 and Layer 2 VLAN’s only. Protocol type based VLAN’s and higher layer VLAN’s have been allowed for, but are not defined in this standard. As a result, these VLAN’s will remain proprietary.


4.2 Types of Connections

Devices on a VLAN can be connected in three ways based on whether the connected devices are VLAN-aware or VLAN-unaware. Recall that a VLAN-aware device is one which understands VLAN memberships (i.e. which users belong to a VLAN) and VLAN formats.


Figure 7: Trunk link between two VLAN-aware bridges.


Figure 8: Access link between a VLAN-aware bridge and a VLAN-unaware device.


Figure 9: Hybrid link containing both VLAN-aware and VLAN-unaware devices.

It must also be noted that the network can have a combination of all three types of links.


4.3 Frame Processing

A bridge on receiving data determines to which VLAN the data belongs either by implicit or explicit tagging. In explicit tagging a tag header is added to the data. The bridge also keeps track of VLAN members in a filtering database which it uses to determine where the data is to be sent. Following is an explanation of the contents of the filtering database and the format and purpose of the tag header [802.1Q].


Figure 10: Active topology of network and VLAN A using spanning tree algorithm.


Figure 11: Ethernet frame tag header.


Figure 12: Token ring and FDDI tag header.


Figure 13: Tag control information (TCI).


5.0 Summary

As we have seen, there are significant advances in the field of networks in the form of VLAN's, which allow the formation of virtual workgroups, better security, improved performance, simplified administration, and reduced costs. VLAN's are formed by the logical segmentation of a network and can be classified into Layer 1, 2, 3 and higher-layer VLAN's. Only Layers 1 and 2 are specified in the draft standard 802.1Q. Tagging and the filtering database allow a bridge to determine the source and destination VLAN for received data. VLAN's, if implemented effectively, show considerable promise in future networking solutions.


Question: virtual LAN (VLAN)

A VLAN (virtual LAN) abstracts the idea of the local area network (LAN) by providing data link connectivity for a subnet. One or more network switches may support multiple, independent VLANs, creating Layer 2 (data link) implementations of subnets. A VLAN is associated with a broadcast domain. It is usually composed of one or more Ethernet switches.

VLANs make it easy for network administrators to partition a single switched network to match the functional and security requirements of their systems without having to run new cables or make major changes in their current network infrastructure. Ports (interfaces) on switches can be assigned to one or more VLANs, enabling systems to be divided into logical groups — based on which department they are associated with — and establish rules about how systems in the separate groups are allowed to communicate with each other. These groups can range from the simple and practical (computers in one VLAN can see the printer on that VLAN, but computers outside that VLAN cannot), to the complex and legal (for example, computers in the retail banking departments cannot interact with computers in the trading departments).

Each VLAN provides data link access to all hosts connected to switch ports configured with the same VLAN ID. The VLAN tag is a 12-bit field in the Ethernet header that provides support for up to 4,096 VLANs per switching domain. VLAN tagging is standardized in IEEE (Institute of Electrical and Electronics Engineers) 802.1Q and is often called Dot1Q.

When an untagged frame is received from an attached host, the VLAN ID tag configured on that interface is added to the data link frame header, using the 802.1Q format. The 802.1Q frame is then forwarded toward the destination. Each switch uses the tag to keep each VLAN's traffic separate from other VLANs, forwarding it only where the VLAN is configured. Trunk links (described below) between switches handle multiple VLANs, using the tag to keep them segregated. When the frame reaches the destination switch port, the VLAN tag is removed before the frame is transmitted to the destination device.

Multiple VLANs can be configured on a single port using a trunk configuration in which each frame sent via the port is tagged with the VLAN ID, as described above. The neighboring device’s interface, which may be on another switch or on a host that supports 802.1Q tagging, will need to support trunk mode configuration in order to transmit and receive tagged frames. Any untagged Ethernet frames are assigned to a default VLAN, which can be designated in the switch configuration.
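On Cisco switches, that default VLAN for untagged frames on a trunk is called the native VLAN and is set per trunk port (a sketch; the interface name and VLAN number are arbitrary choices):

```
interface GigabitEthernet0/24
 switchport mode trunk
 ! untagged frames received on this trunk are assigned to VLAN 99
 switchport trunk native vlan 99
```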

When a VLAN-enabled switch receives an untagged Ethernet frame from an attached host, it adds the VLAN tag assigned to the ingress interface. The frame is forwarded to the port of the host with the destination MAC address (media access control address). Broadcast, unknown unicast and multicast (BUM traffic) is forwarded to all ports in the VLAN. When a previously unknown host replies to an unknown unicast frame, the switches learn the location of this host and do not flood subsequent frames addressed to that host.

The switch-forwarding tables are kept up to date by two mechanisms. First, old forwarding entries are removed from the forwarding tables on a periodic basis, often a configurable timer. Second, any topology change causes the forwarding table refresh timer to be reduced, triggering a refresh.

The Spanning Tree Protocol (STP) is used to create loop-free topology among the switches in each Layer 2 domain. A per-VLAN STP instance can be used, which enables different Layer 2 topologies or a multi-instance STP (MISTP) can be used to reduce STP overhead if the topology is the same among multiple VLANs. STP blocks forwarding on links that might produce forwarding loops, creating a spanning tree from a selected root switch. This blocking means that some links will not be used for forwarding until a failure in another part of the network causes STP to make the link part of an active forwarding path.

The figure above shows a switch domain with four switches and two VLANs. The switches are connected in a ring topology. STP causes one port to go into blocking state so that a tree topology is formed (i.e., no forwarding loops). The port on switch D to switch C is blocking, as indicated by the red bar across the link. The links between the switches and to the router are trunking VLAN 10 (orange) and VLAN 20 (green). The hosts connected to VLAN 10 can communicate with server O. The hosts connected to VLAN 20 can communicate with server G. The router has an IPv4 subnet configured on each VLAN to provide connectivity for any communications between the two VLANs.

Disadvantages of VLAN

The limitation of 4,096 VLANs per switching domain creates problems for large hosting providers, which often need to allocate tens or hundreds of VLANs for each customer. To address this limitation, other protocols, like VXLAN (Virtual Extensible LAN), NVGRE (Network Virtualization using Generic Routing Encapsulation) and Geneve, support larger tags and the ability to tunnel Layer 2 frames within Layer 3 (network) packets.

Finally, data communication between VLANs is performed by routers. Modern switches often incorporate routing functionality and are called Layer 3 switches.

Question: Introductory level explanation of VLANs

What’s the basic use case(s) for VLANs?

What are the basic design principles?

I’m looking for something like a two paragraph executive summary style answer so I can determine if I need to learn about VLANs to implement them.


A VLAN (Virtual LAN) is a way of creating multiple virtual switches inside one physical switch. So for instance ports configured to use VLAN 10 act as if they’re connected to the exact same switch. Ports in VLAN 20 can not directly talk to ports in VLAN 10. They must be routed between the two (or have a link that bridges the two VLANs).

There are a lot of reasons to implement VLANs. Typically the least of these reasons is the size of the network. I’ll bullet list a few reasons and then break each one open.

  • Security
  • Link Utilization
  • Service Separation
  • Service Isolation
  • Subnet Size

Security: Security isn't itself achieved by creating a VLAN; however, how you connect that VLAN to other subnets could allow you to filter/block access to that subnet. For instance, if you have an office building that has 50 computers and 5 servers, you could create a VLAN for the servers and a VLAN for the computers. For the computers to communicate with the servers you could use a firewall to route and filter that traffic. This would then allow you to apply IPS/IDS, ACLs, etc. to the connection between the servers and computers.

Link Utilization: (Edit) I can't believe I left this out the first time. Link utilization is another big reason to use VLANs. Spanning tree, by design, builds a single path through your layer 2 network to prevent loops. If you have multiple redundant links to your aggregating devices, then some of these links will go unused. To get around this you can build multiple STP topologies with different VLANs. This is accomplished with Cisco-proprietary PVST or RPVST, or standards-based MST. This gives you multiple STP topologies you can use to put your previously unused links to work. For example, if I had 50 desktops I could place 25 of them in VLAN 10 and 25 of them in VLAN 20. VLAN 10 could then take the "left" side of the network while VLAN 20 takes the "right" side.

Service Separation: This one is pretty straightforward. If you have IP security cameras, IP phones, and desktops all connecting into the same switch, it might be easier to separate these services into their own subnets. This also lets you apply QoS markings to these services based on VLAN instead of some higher-layer classifier (e.g., NBAR). You can also apply ACLs on the device performing L3 routing to prevent communication between VLANs that might not be desired. For instance, I can prevent the desktops from accessing the phones or security cameras directly.

Service Isolation: If you have a pair of ToR (top-of-rack) switches in a single rack with a few VMware hosts and a SAN, you could create an iSCSI VLAN that remains unrouted. This gives you an entirely isolated iSCSI network, so that no other device can attempt to access the SAN or disrupt communication between the hosts and the SAN. This is just one example of service isolation.

Subnet Size: As stated before, if a single site becomes too large you can break it down into different VLANs, which reduces the number of hosts that need to see and process each broadcast.

There are certainly more ways VLANs are useful (I can think of several that I use specifically as an Internet Service Provider), but I feel these are the most common and should give you a good idea on how/why we use them. There are also Private VLANs that have specific use cases and are worth mentioning here.


As networks grow larger and larger, scalability becomes an issue. In order to communicate, every device needs to send broadcasts, which are sent to all devices in a broadcast domain. As more devices are added to the broadcast domain, more broadcasts start to saturate the network. At this point, multiple issues creep in, including bandwidth saturation with broadcast traffic, increased processing on each device (CPU usage), and even security issues. Splitting this large broadcast domain into smaller broadcast domains becomes increasingly necessary.

Enter VLANs.

A VLAN, or Virtual LAN, creates separate broadcast domains virtually, eliminating the need to build completely separate hardware LANs to overcome the large-broadcast-domain issue. Instead, a switch can contain many VLANs, each one acting as a separate, autonomous broadcast domain. In fact, two VLANs cannot communicate with each other without the intervention of a layer 3 device such as a router, which is what layer 3 switching is all about.

In summary, VLANs, at the most basic level, segment large broadcast domains into smaller, more manageable broadcast domains to increase scalability in your ever-expanding network.


VLANs are logical networks created within the physical network. Their primary use is to provide isolation, often as a means to decrease the size of the broadcast domain within a network, but they can be used for a number of other purposes.

They are a tool that any network engineer should be familiar with and like any tool, they can be used incorrectly and/or at the wrong times. No single tool is the correct one in all networks and all situations, so the more tools you can use, the better you are able to work in more environments. Knowing more about VLANs allows you to use them when you need them and to use them correctly when you do.

As one example of how they can be used: I currently work in an environment where SCADA (supervisory control and data acquisition) devices are used widely. SCADA devices are typically fairly simple and have a long history of less-than-stellar software development, often presenting major security vulnerabilities.

We have placed the SCADA devices in a separate VLAN with no L3 gateway. The only access into their logical network is through the server they communicate with (which has two interfaces, one in the SCADA VLAN), and that server can be secured with its own host-based security, something not possible on the SCADA devices themselves. The SCADA devices are isolated from the rest of the network, even while connected to the same physical devices, so any vulnerability is mitigated.


In terms of design principles, the most common implementation is to align your VLANs with your organizational structure, i.e., engineering folks in one VLAN, marketing in another, IP phones in another, etc. Other designs include using VLANs as "transport" for separate network functions across one (or more) cores. Layer 3 termination of VLANs ("SVI" in Cisco parlance, "VE" in Brocade, etc.) is also possible on some devices, which eliminates the need for a separate piece of hardware to do inter-VLAN communication when applicable.

VLANs become cumbersome to manage and maintain at scale, as you've probably already seen cases of on NESE. In the service provider realm, there's PB (Provider Bridging, commonly known as "QinQ", double tagging, stacked tags, etc.), PBB (Provider Backbone Bridging, "MAC-in-MAC") and PBB-TE, which were designed to mitigate the limit on the number of VLAN IDs available. PBB-TE aims more at eliminating the need for dynamic learning, flooding, and spanning tree. There are only 12 bits available for use as a VLAN ID in a C-TAG/S-TAG (0x000 and 0xFFF are reserved), which is where the 4,094 limitation comes from.

VPLS or PBB can be used to eliminate the traditional scaling ceilings involved with PB.


The basic use case for VLANs is almost exactly the same as the basic use case for segmenting the network into multiple data link broadcast domains. The key difference is that with a physical LAN, you need at least one device (typically a switch) for each broadcast domain, whereas with a virtual LAN, broadcast domain membership is determined on a port-by-port basis and is reconfigurable without adding or replacing hardware.

For basic applications, apply the same design principles to VLANs as you would to physical LANs. The three concepts you need to know to do this are:

  1. Trunking – Any link that carries frames belonging to more than one VLAN is a trunk link. Typically switch-to-switch and switch-to-router links are configured to be trunk links.
  2. Tagging – When transmitting onto a trunk link, the device must tag each frame with the numeric VLAN ID to which it belongs so that the receiving device can properly confine it to the correct broadcast domain. In general, host-facing ports are untagged, while switch-facing and router-facing ports are tagged. The tag is an additional part of the data link encapsulation.
  3. Virtual Interfaces – On a device with one or more trunk link interfaces, it is often necessary to attach, in the logical sense, the device as a link terminal to one or more of the individual VLANs that are present within the trunk. This is particularly true of routers. This logical link attachment is modeled as a virtual interface that acts as a port that is connected to the single broadcast domain associated with the designated VLAN.


If I may offer one more piece of information, which might help.

To understand VLANs, you must also understand two key concepts.

-Subnetting – Assuming you want the various devices to be able to talk to one another (servers and clients, for example), each VLAN must be assigned an IP subnet. This is where the SVI mentioned above comes in. That enables you to begin routing between the VLANs.

-Routing – Once you have each VLAN created, a subnet assigned to the clients on each VLAN, and an SVI created for each VLAN, you will need to enable routing. Routing can be a very simple setup, with a static default route to the Internet and EIGRP or OSPF network statements for each of the subnets.
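The subnet-per-VLAN planning described above can be sketched in a few lines. The VLAN numbers and the 10.0.0.0/16 block below are arbitrary examples, not values from the answer; the first usable address in each subnet is assumed to be the SVI/gateway, a common but not universal convention:

```python
# Illustrative only: carve one /24 per VLAN out of a private block and
# take the first host address in each subnet as the SVI (gateway).
import ipaddress

block = ipaddress.ip_network("10.0.0.0/16")   # example address block
subnets = iter(block.subnets(new_prefix=24))

vlan_plan = {}
for vlan_id in (10, 20, 30):                  # example VLAN IDs
    subnet = next(subnets)
    svi = next(subnet.hosts())                # first usable address -> SVI
    vlan_plan[vlan_id] = (subnet, svi)

for vlan_id, (subnet, svi) in vlan_plan.items():
    print(f"VLAN {vlan_id}: subnet {subnet}, SVI {svi}")
```

With the SVIs in place, the router (or L3 switch) simply routes between these directly connected subnets, plus whatever default route points at the Internet.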

Once you see how it all comes together, it is actually quite elegant.


The original use of a VLAN was to restrict the broadcast area in a network: broadcasts are limited to their own VLAN. Later, additional functionality was added. However, keep in mind that VLANs are layer 2 constructs in, for example, Cisco switches. You can add layer 3 functionality by assigning an IP address to the VLAN interface on the switch, but this is not mandatory.

additional functionality:

  • trunking: carry multiple VLANs through one physical connection (e.g., when connecting 2 switches, one physical link is enough to carry all VLANs; separating the VLANs is done by tagging – see dot1Q for Cisco)
  • security
  • easier management (e.g., a shutdown on one VLAN doesn't impact the other VLANs' connectivity…)

Question: What is a Virtual LAN (VLAN)?


A virtual LAN (Local Area Network) is a logical subnetwork that can group together a collection of devices from different physical LANs. Larger business computer networks often set up VLANs to re-partition their network for improved traffic management.

Several different kinds of physical networks support virtual LANs including both Ethernet and Wi-Fi.

Benefits of a VLAN

When set up correctly, virtual LANs can improve the overall performance of busy networks. VLANs are intended to group together client devices that communicate with each other most frequently. Traffic between devices split across two or more physical networks ordinarily needs to be handled by a network's core routers, but with a VLAN that traffic can be handled more efficiently by network switches instead.

VLANs also bring additional security benefits on larger networks by allowing greater control over which devices have local access to each other. Wi-Fi guest networks are often implemented using wireless access points that support VLANs.

Static and Dynamic VLANs

Network administrators often refer to static VLANs as "port-based VLANs." A static VLAN requires an administrator to assign individual ports on the network switch to a virtual network. No matter what device plugs into that port, it becomes a member of that same pre-assigned virtual network.

Dynamic VLAN configuration allows an administrator to define network membership according to characteristics of the devices themselves rather than their switch port location. For example, a dynamic VLAN can be defined with a list of physical addresses (MAC addresses) or network account names.

VLAN Tagging and Standard VLANs

VLAN tags for Ethernet networks follow the IEEE 802.1Q industry standard. An 802.1Q tag consists of 32 bits (4 bytes) of data inserted into the Ethernet frame header. The first 16 bits of this field contain the hardcoded number 0x8100, which triggers Ethernet devices to recognize the frame as belonging to an 802.1Q VLAN. The last 12 bits of this field contain the VLAN number, a number between 1 and 4,094.
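That 4-byte layout can be sketched directly: the 16-bit 0x8100 identifier, followed by a 16-bit field whose low 12 bits hold the VLAN number (the remaining 3-bit priority and 1-bit DEI fields share that second half). The helper names below are invented for illustration:

```python
# Sketch of the 802.1Q tag: TPID 0x8100, then PCP (3 bits), DEI (1 bit),
# and the 12-bit VLAN ID, all in network byte order.
import struct

def build_tag(vid, pcp=0, dei=0):
    assert 1 <= vid <= 4094, "VIDs 0x000 and 0xFFF are reserved"
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

def parse_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100, "not an 802.1Q tag"
    return {"pcp": tci >> 13, "dei": (tci >> 12) & 1, "vid": tci & 0x0FFF}

tag = build_tag(vid=100, pcp=5)
print(tag.hex())       # 8100a064
print(parse_tag(tag))  # {'pcp': 5, 'dei': 0, 'vid': 100}
```

In a real frame these 4 bytes sit between the source MAC address and the EtherType/length field.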

Best practices of VLAN administration define several standard types of virtual networks:

  • Native VLAN: Ethernet VLAN devices treat all untagged frames as belonging to the native VLAN by default. The native VLAN is VLAN 1, although administrators can change this default number.
  • Management VLAN: Used to support remote connections from network administrators. Some networks use VLAN 1 as the management VLAN while others set up a special number just for this purpose (to avoid conflicting with other network traffic).

Setting up a VLAN

At a high level, network administrators set up new VLANs as follows:

  1. Choose a valid VLAN number
  2. Choose a private IP address range for devices on that VLAN to use
  3. Configure the switch device with either static or dynamic settings.  Static configurations require the administrator to assign a VLAN number to each switch port while dynamic configurations require assigning a list of MAC addresses or usernames to a VLAN number.
  4. Configure routing between VLANs as needed. Configuring two or more VLANs to communicate with each other requires the use of either a VLAN-aware router or a Layer 3 switch.

The administrative tools and interfaces used vary greatly depending on the equipment involved.

Question: Virtual LAN


A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2).[1][2] LAN is the abbreviation for local area network, and in this context virtual refers to a physical object recreated and altered by additional logic. VLANs work by applying tags to network frames and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it were split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed.

VLANs allow network administrators to group hosts together even if the hosts are not directly connected to the same network switch. Because VLAN membership can be configured through software, this can greatly simplify network design and deployment. Without VLANs, grouping hosts according to their resource needs necessitates the labor of relocating nodes or rewiring data links. VLANs allow networks and devices that must be kept separate to share the same physical cabling without interacting, improving simplicity, security, traffic management, or economy. For example, a VLAN could be used to separate traffic within a business by user group or by type of traffic, so that users or low-priority traffic cannot directly affect the rest of the network's functioning. Many Internet hosting services use VLANs to separate their customers' private zones from each other, allowing each customer's servers to be grouped together in a single network segment while being located anywhere in their data center. Some precautions are needed to prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

To subdivide a network into VLANs, one configures network equipment. Simpler equipment can partition only per physical port (if at all), in which case each VLAN is connected with a dedicated network cable. More sophisticated devices can mark frames through VLAN tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization, or both to route data efficiently.


VLANs address issues such as scalability, security, and network management. Network architects set up VLANs to provide network segmentation. Routers between VLANs filter broadcast traffic, enhance network security, perform address summarization, and mitigate network congestion.

In a network utilizing broadcasts for service discovery, address assignment and resolution and other services, as the number of peers on a network grows, the frequency of broadcasts also increases. VLANs can help manage broadcast traffic by forming multiple broadcast domains. Breaking up a large network into smaller independent segments reduces the amount of broadcast traffic each network device and network segment has to bear. Switches may not bridge network traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain.

VLANs can also help create multiple layer 3 networks on a single physical infrastructure. VLANs are data link layer (OSI layer 2) constructs, analogous to Internet Protocol (IP) subnets, which are network layer (OSI layer 3) constructs. In an environment employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although it is possible to have multiple subnets on one VLAN.

Without VLAN capability, users are assigned to networks based on geography and are limited by physical topologies and distances. VLANs can logically group networks to decouple the users’ network location from their physical location. By using VLANs, one can control traffic patterns and react quickly to employee or equipment relocations. VLANs provide the flexibility to adapt to changes in network requirements and allow for simplified administration.[2]

VLANs can be used to partition a local network into several distinctive segments.[3]

A common infrastructure shared across VLAN trunks can provide a measure of security with great flexibility for a comparatively low cost. Quality-of-service schemes can optimize traffic on trunk links for real-time (e.g., VoIP) or low-latency requirements (e.g., SAN). However, VLANs as a security solution should be implemented with great care, as they can be defeated if not configured correctly.[4]

In cloud computing VLANs, IP addresses, and MAC addresses in the cloud are resources that end users can manage. To help mitigate security issues, placing cloud-based virtual machines on VLANs may be preferable to placing them directly on the Internet.[5]


After successful experiments with voice over Ethernet from 1981 to 1984, Dr. W. David Sincoskie joined Bellcore and began addressing the problem of scaling up Ethernet networks. At 10 Mbit/s, Ethernet was faster than most alternatives at the time. However, Ethernet was a broadcast network and there was no good way of connecting multiple Ethernet networks together. This limited the total bandwidth of an Ethernet network to 10 Mbit/s and the maximum distance between nodes to a few hundred feet.

By contrast, although the existing telephone network's speed for individual connections was limited to 56 kbit/s (less than one hundredth of Ethernet's speed), the total bandwidth of that network was estimated at 1 Tbit/s (100,000 times greater than Ethernet).

Although it was possible to use IP routing to connect multiple Ethernet networks together, it was expensive and relatively slow. Sincoskie started looking for alternatives that required less processing per packet. In the process he independently reinvented transparent bridging, the technique used in modern Ethernet switches.[6] However, using switches to connect multiple Ethernet networks in a fault-tolerant fashion requires redundant paths through that network, which in turn requires a spanning tree configuration. This ensures that there is only one active path from any source node to any destination on the network. This causes centrally located switches to become bottlenecks, limiting scalability as more networks are interconnected.

To help alleviate this problem, Sincoskie invented VLANs by adding a tag to each Ethernet frame. These tags could be thought of as colors, say red, green, or blue. In this scheme, each switch could be assigned to handle frames of a single color, and ignore the rest. The networks could be interconnected with three spanning trees, one for each color. By sending a mix of different frame colors, the aggregate bandwidth could be improved. Sincoskie referred to this as a multitree bridge. He and Chase Cotton created and refined the algorithms necessary to make the system feasible.[7] This color is what is now known in the Ethernet frame as the IEEE 802.1Q header, or the VLAN tag. While VLANs are commonly used in modern Ethernet networks, they are not used in the manner first envisioned here.

In 2003, Ethernet VLANs were described in the first edition of the IEEE 802.1Q standard.[8]

In 2012, the IEEE approved IEEE 802.1aq (shortest path bridging) to standardize load-balancing and shortest path forwarding of (multicast and unicast) traffic allowing larger networks with shortest path routes between devices. In 802.1aq Shortest Path Bridging Design and Evolution: The Architect’s Perspective David Allan and Nigel Bragg stated that shortest path bridging is one of the most significant enhancements in Ethernet’s history.[9]

Configuration and design considerations

Early network designers often segmented physical LANs with the aim of reducing the size of the Ethernet collision domain—thus improving performance. When Ethernet switches made this a non-issue (because each switch port is a collision domain), attention turned to reducing the size of the broadcast domain at the MAC layer. VLANs were first employed to separate several broadcast domains across one physical medium.

A VLAN can also serve to restrict access to network resources without regard to physical topology of the network, although the strength of this method remains debatable as VLAN hopping is a means of bypassing such security measures if not prevented. VLAN hopping can be mitigated with proper switchport configuration.[10]

VLANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of involving Layer 3 (the network layer). In the context of VLANs, the term "trunk" denotes a network link carrying multiple VLANs, which are identified by labels (or "tags") inserted into their packets. Such trunks must run between "tagged ports" of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather than links to hosts. (Note that the term "trunk" is also used for what Cisco calls "channels": link aggregation or port trunking.) A router (Layer 3 device) serves as the backbone for network traffic going across different VLANs.

A basic switch not configured for VLANs has VLAN functionality disabled or permanently enabled with a default VLAN that contains all ports on the device as members.[2] The default VLAN typically has the ID "1". Every device connected to one of its ports can send packets to any of the others. Separating ports by VLAN group separates their traffic very much as if each group were connected to a distinct switch.

It is only when the VLAN port group is to extend to another device that tagging is used. Since communications between ports on two different switches travel via the uplink ports of each switch involved, every VLAN containing such ports must also contain the uplink port of each switch involved, and traffic through these ports must be tagged.

Management of the switch requires that the administrative functions be associated with one or more of the configured VLANs. If the default VLAN were deleted or renumbered without first moving the management connection to a different VLAN, it is possible for the administrator to be locked out of the switch configuration, normally requiring physical access to the switch to regain management by either a forced clearing of the device configuration (possibly to the factory default), or by connecting through a console port or similar means of direct management.

Switches typically have no built-in method to indicate VLAN port members to someone working in a wiring closet. It is necessary for a technician to either have administrative access to the device to view its configuration, or for VLAN port assignment charts or diagrams to be kept next to the switches in each wiring closet. These charts must be manually updated by the technical staff whenever port membership changes are made to the VLANs.

Generally, VLANs within the same organization will be assigned different, non-overlapping network address ranges. This is not a requirement of VLANs: there is no issue with separate VLANs using identical, overlapping address ranges (e.g., two VLANs each using the same private address range). However, it is not possible to route data between two networks with overlapping addresses without delicate IP remapping, so if the goal of VLANs is segmentation of a larger overall organizational network, non-overlapping addresses must be used in each separate VLAN.

Several kinds of network technology, including Ethernet and Wi-Fi, have VLAN capabilities.

Protocols and design

The protocol most commonly used today to configure VLANs is IEEE 802.1Q. The IEEE committee defined this method of multiplexing VLANs in an effort to provide multivendor VLAN support. Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco Inter-Switch Link (ISL) and 3Com's Virtual LAN Trunk (VLT). Cisco also implemented VLANs over FDDI by carrying VLAN information in an IEEE 802.10 frame header, contrary to the purpose of the IEEE 802.10 standard.

Both ISL and IEEE 802.1Q tagging perform “explicit tagging” – the frame itself is tagged with VLAN information. ISL uses an external tagging process that does not modify the Ethernet frame, while 802.1Q uses a frame-internal field for tagging, and therefore does modify the Ethernet frame. This internal tagging is what allows IEEE 802.1Q to work on both access and trunk links: standard Ethernet frames are used and so can be handled by commodity hardware.

IEEE 802.1Q

Main article: IEEE 802.1Q

Under IEEE 802.1Q, the maximum number of VLANs on a given Ethernet network is 4,094 (4,096 values provided by the 12-bit VID field minus reserved values 0x000 and 0xFFF). This does not impose the same limit on the number of IP subnets in such a network, since a single VLAN can contain multiple IP subnets. IEEE 802.1ad extends 802.1Q by adding support for multiple, nested VLAN tags (‘QinQ’). Shortest Path Bridging (IEEE 802.1aq) expands the VLAN limit to 16 million.

Cisco Inter-Switch Link (ISL)

Main article: Cisco Inter-Switch Link

Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect multiple switches and maintain VLAN information as traffic travels between switches on trunk links. This technology provides one method for multiplexing bridge groups (VLANs) over a high-speed backbone. It is defined for Fast Ethernet and Gigabit Ethernet, as is IEEE 802.1Q. ISL has been available on Cisco routers since Cisco IOS Software Release 11.1.

With ISL, an Ethernet frame is encapsulated with a header that transports VLAN IDs between switches and routers. ISL does add overhead to the frame as a 26-byte header containing a 10-bit VLAN ID. In addition, a 4-byte CRC is appended to the end of each frame. This CRC is in addition to any frame checking that the Ethernet frame requires. The fields in an ISL header identify the frame as belonging to a particular VLAN.
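The per-frame cost of this encapsulation is easy to check arithmetically: the 26-byte header is prepended and the 4-byte CRC appended, so every frame grows by 30 bytes. A quick sketch (frame sizes chosen as the standard Ethernet minimum and maximum):

```python
# Back-of-envelope ISL overhead, per the figures stated above:
# a 26-byte header is prepended and a 4-byte CRC appended to each frame.
ISL_HEADER = 26
ISL_CRC = 4

def isl_encapsulated_size(frame_bytes):
    """Size on the wire of an ISL-encapsulated Ethernet frame."""
    return ISL_HEADER + frame_bytes + ISL_CRC

print(isl_encapsulated_size(64))    # minimum Ethernet frame -> 94 bytes
print(isl_encapsulated_size(1518))  # maximum standard frame -> 1548 bytes
```

Compare this with 802.1Q, which inserts only 4 bytes into the existing frame rather than wrapping it.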

A VLAN ID is added only if the frame is forwarded out a port configured as a trunk link. If the frame is to be forwarded out a port configured as an access link, the ISL encapsulation is removed.

Cisco VLAN Trunking Protocol (VTP)

Main article: VLAN Trunking Protocol

Multiple VLAN Registration Protocol

Main article: Multiple Registration Protocol

Shortest Path Bridging

Main article: Shortest Path Bridging

IEEE 802.1aq (Shortest Path Bridging, SPB) allows all paths to be active with multiple equal-cost paths and provides much larger layer 2 topologies (up to 16 million identifiers compared to the 4,094-VLAN limit) with faster convergence times. It also improves the use of mesh topologies through increased bandwidth and redundancy between all devices by allowing traffic to load-share across all paths of a mesh network.

Establishing VLAN memberships

The two common approaches to assigning VLAN membership are as follows:

  • Static VLANs
  • Dynamic VLANs

Static VLANs are also referred to as port-based VLANs. Static VLAN assignments are created by assigning ports to a VLAN. As a device enters the network, the device automatically assumes the VLAN of the port. If the user changes ports and needs access to the same VLAN, the network administrator must manually make a port-to-VLAN assignment for the new connection.

Dynamic VLANs are created using software or by protocol. With a VLAN Management Policy Server (VMPS), an administrator can assign switch ports to VLANs dynamically based on information such as the source MAC address of the device connected to the port or the username used to log onto that device. As a device enters the network, the switch queries a database for the VLAN membership of the port that device is connected to. Protocol methods include Multiple VLAN Registration Protocol (MVRP) and the somewhat obsolete GARP VLAN Registration Protocol (GVRP).

Protocol-based VLANs

In a switch that supports protocol-based VLANs, traffic is handled on the basis of its protocol. Essentially, this segregates or forwards traffic from a port depending on the particular protocol of that traffic; traffic of any other protocol is not forwarded on the port.

For example, it is possible to connect the following to a given switch:

If a protocol-based VLAN is created that supports IP and contains all three ports, this prevents IPX traffic from being forwarded to ports 10 and 30, and ARP traffic from being forwarded to ports 20 and 30, while still allowing IP traffic to be forwarded on all three ports.
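A hedged sketch of that forwarding behavior (the port numbers match the example; the per-port protocol assignments are inferred from the stated outcome and may not match the original figure; EtherType values are the standard ones for IPv4, ARP, and IPX):

```python
# Hypothetical protocol-based VLAN membership: each port forwards only the
# EtherTypes of the protocol VLANs it belongs to.
ETHERTYPE = {"ipv4": 0x0800, "arp": 0x0806, "ipx": 0x8137}

# port -> set of EtherTypes forwarded on that port (assumed assignments)
port_protocols = {
    10: {ETHERTYPE["ipv4"], ETHERTYPE["arp"]},
    20: {ETHERTYPE["ipv4"], ETHERTYPE["ipx"]},
    30: {ETHERTYPE["ipv4"]},
}

def forwarded(port, ethertype):
    """True if traffic of this EtherType may leave the switch on this port."""
    return ethertype in port_protocols[port]

print(forwarded(30, ETHERTYPE["ipv4"]))  # True: IP flows on all three ports
print(forwarded(30, ETHERTYPE["ipx"]))   # False: IPX is kept off port 30
```

Under these assignments, IPX is confined to port 20 and ARP to port 10, while IP traffic reaches all three ports, matching the outcome described above.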

VLAN Cross Connect

VLAN Cross Connect (CC) is a mechanism used to create switched VLANs. VLAN CC uses IEEE 802.1ad frames in which the S-Tag is used as a label, as in MPLS. IEEE approves the use of such a mechanism in part 6.11 of IEEE 802.1ad-2005.

Question: Differences Between Physical and Virtual LANs


Differences Between Physical and Virtual LANs

It is important to understand that a VLAN does not create new devices or attempt to virtually represent new devices. A lot of attention is currently focused on virtualization and the abstraction of services; however, for the purposes of this discussion, we will ignore those technologies and how they operate.

The purpose of a VLAN is simple: it removes the limitation of physically switched LANs, in which all devices are automatically connected to each other. With a VLAN, it is possible to have hosts that are connected to the same physical LAN but not allowed to communicate directly. This restriction gives us the ability to organize a network without requiring that the physical LAN mirror the logical connection requirements of any specific organization.

To make this concept a bit clearer, let’s use the analogy of a telephone system. Imagine that a company has 500 employees, each with his or her own telephone and dedicated phone number. If the telephones are connected like a traditional residential phone system, anyone has the ability to call any direct phone number within the company, regardless of whether that employee needs to receive direct business phone calls. This arrangement presents a number of problems, from potential wrong number calls to prank or malicious calls that are intended to reduce the organization’s productivity.

Now suppose a more efficient and secure option is offered, allowing the business to install and configure a separate internal phone system. This phone system forces external calls to go through a separate switchboard or operator—in a more modern phone network, an Integrated Voice Response (IVR) system. This new phone system lets internal users connect directly to each other via extensions (typically using shorter numbers), while it limits what the internal user’s phones can do and where/who the user can call. This internal phone system allows the organization to virtually separate the internal phones. This is essentially what a VLAN does on a network.

To take this analogy into the networking world, consider the network shown in Figure 1.

Figure 1

Figure 1 Basic switched network.

Suppose that hosts A and B are together in one department, and hosts C and D are together in another department. With physical LANs, they could be connected in only two ways: either all of the devices are connected together on the same LAN (hoping that the users of the other department hosts will not attempt to communicate), or each of the department hosts could be connected together on separate physical switches. Neither of these is a good solution. The first option opens up many potential security holes, and the second option would become expensive very quickly.

To solve this sort of problem, the concept of a VLAN was developed. With a VLAN, each port on a switch can be configured into a specific VLAN, and then the switch will only allow devices that are configured into the same VLAN to communicate. Using the network in Figure 1, if A and B were grouped together and separated from the C and D group, you could place A and B into VLAN 10 and C and D into VLAN 20. This way, their traffic would be kept isolated on the switch. In this configuration, the traffic between groups would be prevented at Layer 2 because of the difference in assigned VLANs.
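To make this concrete, the port assignments just described could be sketched on a Cisco-style switch as follows. This is a sketch, not a definitive configuration: the interface numbers are assumptions for illustration and will depend on the actual cabling.

Switch#configure terminal                      (enter global configuration mode)
Switch(config)#vlan 10                         (create VLAN 10 for hosts A and B)
Switch(config-vlan)#vlan 20                    (create VLAN 20 for hosts C and D)
Switch(config-vlan)#exit
Switch(config)#interface fastethernet 0/1      (assumed port for host A)
Switch(config-if)#switchport access vlan 10
Switch(config-if)#interface fastethernet 0/2   (assumed port for host B)
Switch(config-if)#switchport access vlan 10
Switch(config-if)#interface fastethernet 0/3   (assumed port for host C)
Switch(config-if)#switchport access vlan 20
Switch(config-if)#interface fastethernet 0/4   (assumed port for host D)
Switch(config-if)#switchport access vlan 20
Switch(config-if)#end

With this in place, A and B can reach each other, C and D can reach each other, but traffic between the two groups is blocked at Layer 2.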

Question: Difference Between VLAN and LAN



VLAN and LAN are two terms used frequently in the networking field. A LAN (Local Area Network) is a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. A VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations.

The attributes of a LAN and a VLAN are the same; the difference is that in a VLAN the end stations can be grouped together regardless of their location. A VLAN is used to create multiple broadcast domains in a single switch. This can be explained with a simple illustration. Say, for instance, there is one 48-port layer 2 switch. If two separate VLANs are created on ports 1 to 24 and ports 25 to 48, that single 48-port layer 2 switch can be made to act like two different switches. This is one of the biggest advantages of using VLANs: you don't have to use two different switches for different networks, because different VLANs can be created for each segment using just one big switch. In a company, for example, users working on different floors of the same building can be connected to the same LAN virtually.
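As a sketch of the 48-port split described above, on a Cisco-style switch this could be done with a few commands (assuming the model supports the interface range command; syntax varies by vendor):

Switch(config)#vlan 10
Switch(config-vlan)#vlan 20
Switch(config-vlan)#exit
Switch(config)#interface range fastethernet 0/1 - 24
Switch(config-if-range)#switchport access vlan 10      (the first "virtual switch")
Switch(config-if-range)#interface range fastethernet 0/25 - 48
Switch(config-if-range)#switchport access vlan 20      (the second "virtual switch")
Switch(config-if-range)#end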

VLANs can help to minimize traffic when compared to traditional LANs. For instance, if certain broadcast traffic is meant for only ten users, those users can be placed on their own VLAN, which in turn reduces the traffic seen by everyone else. The use of VLANs over traditional LANs can also bring down cost, as VLANs eliminate the need for expensive routers.

In LANs, routers process the incoming traffic. As traffic volume increases, latency rises, which results in poor performance. With VLANs, the need for routers is reduced, as VLANs can create broadcast domains through switches instead of routers.

LANs require physical administration: as the location of a user changes, the need arises for recabling, readdressing the new station, and reconfiguring routers and hubs. This mobility of users results in network costs. If a user is moved within a VLAN, however, much of that administrative work is eliminated, as there is no need for router reconfiguration.

Data broadcast on a VLAN is safer than on a traditional LAN, as sensitive data can be accessed only by the users who are on that VLAN.


1. VLAN delivers better performance when compared to traditional LANs.

2. VLAN requires less network administration work when compared to LANs.

3. VLAN helps to reduce costs by eliminating the need for expensive routers unlike LANs.

4. Data transmission on VLAN is safe when compared to traditional LANs.

5. VLANs can help reduce traffic, as they reduce latency and create broadcast domains through switches rather than routers, unlike traditional LANs.

Question: Difference between LAN and VLAN

What is the difference between LAN and VLAN? Which one is suited for broadcasting messages? How do you set up a VLAN? What are their advantages and disadvantages?


If I write a program for a VLAN, will it run if I don't have a switch (each computer connected to the others just using a cable to form a simple LAN)?



LAN means “Local Area Network” and VLAN stands for “Virtual LAN”. There are no real differences between one and the other, except that a VLAN is used to create multiple broadcast domains in a switch. Say for example you have one 48-port layer 2 switch.

If you create 2 VLANs, one on ports 1 to 24 and one on ports 25 to 48, you can make one switch act like two.

One advantage of using VLANs is that you can segment your network by department like this: one class C network for Sales, one class C network for IT, and so on.

You don’t have to use different switches for different networks, because you can just use one big switch and create a different VLAN for each segment.

How to create a VLAN depends on the switch in question. On a Cisco switch you can create a VLAN like this:

SwitchA#configure terminal              (enter global configuration mode)
SwitchA(config)#vlan 3                  (define VLAN 3 and enter VLAN configuration mode)
SwitchA(config-vlan)#name management    (assign the name "management" to VLAN 3)
SwitchA(config-vlan)#exit               (exit VLAN configuration mode)

Now assign ports 2 and 3 to VLAN 3 (port 2 is shown; port 3 is configured the same way):

SwitchA(config)#interface fastethernet 0/2    (select FastEthernet port 2)
SwitchA(config-if)#switchport access vlan 3   (assign the port membership of VLAN 3)
SwitchA(config-if)#exit                       (exit interface configuration mode)
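To verify the result, most Cisco switches offer a summary view:

SwitchA#show vlan brief     (lists each VLAN with its name, status and member ports)

Port 2 should now appear under VLAN 3 rather than under the default VLAN 1.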

Question: LAN vs VLAN | Difference between LAN and VLAN

This page compares LAN vs VLAN and describes the difference between LAN and VLAN. LAN stands for Local Area Network, while VLAN stands for Virtual Local Area Network.

Physical LAN | Local Area Network


LAN is the short form of Local Area Network. In a LAN, hosts are connected to different ports of the same Ethernet switch. The common devices used on a LAN are hubs and switches.

• A hub shares data between computers using the broadcast address. A host sends a frame, and the hub repeats it to the entire network, out all of its ports. All the hosts ignore the frame except the one for which it is intended, as per the destination address. This increases traffic on the network to a great extent.

• The other device, called a switch, shares data between computers using unicast addresses. Hence two hosts can communicate directly within the same switch. Two hosts which are not on the same switch can communicate through routers.


VLAN | Virtual LAN | Virtual Local Area Network


VLAN is the short form of Virtual Local Area Network; it is also known as a Virtual LAN. A VLAN is configured on an Ethernet switch. Unlike a single LAN on an Ethernet switch, multiple virtual LANs can be implemented on a single switch.

This is done by splitting the switch's ports and assigning groups of them to different VLANs. Broadcast, multicast and unknown-destination traffic originating in one VLAN, say VLAN-A, is then limited to the members of that same VLAN-A. The traffic does not cross into the other VLANs on the switch, which brings down the traffic load on the Ethernet switch.


Tabular difference between LAN and VLAN

The following comparison lists similarities and differences between the LAN and VLAN network types.

• Full form: LAN stands for Local Area Network; VLAN stands for Virtual Local Area Network.
• Devices: hubs and switches are used in a LAN; switches with VLAN tagging capabilities are used for VLANs.
• Coverage: a LAN provides host (i.e. node) to host communication within a building; a VLAN allows host-to-host communication between buildings far beyond the LAN limit, as VLANs can span multiple switches located in different office or building premises.
• Protocols: a LAN uses normal Ethernet frames; a VLAN uses protocols such as IEEE 802.1Q and the VLAN Trunking Protocol (VTP), which help route traffic to the correct interfaces.
• Port-to-subnet mapping: in a LAN, ports cannot be moved between different subnets; in a VLAN, ports can be moved between subnets easily on the same switch, so different VLANs on the same switch can have different numbers of ports.
• Number of LANs/VLANs per Ethernet switch: one LAN consisting of multiple hosts per switch; many VLANs can coexist on the same Ethernet switch, each with a different number of ports.
• Software configuration: not needed for a LAN; for a VLAN you need to know the tagging commands in order to configure it.
• Application: a LAN allows sharing of common resources as well as interconnectivity between hosts; a VLAN does the same, and in addition extends the capabilities of a LAN with easy configurability and less burden on the Ethernet switch.

Question: VLAN Overview

A virtual LAN, or VLAN, is a group of computers, network printers, network servers, and other network devices that behave as if they were connected to a single network.

In its basic form, a VLAN is a broadcast domain. The difference between a traditional broadcast domain and one defined by a VLAN is that a broadcast domain is seen as a distinct physical entity with a router on its boundary. VLANs are similar to broadcast domains because their boundaries are also defined by a router. However, a VLAN is a logical topology, meaning that the VLAN hosts are not grouped within the physical confines of a traditional broadcast domain, such as an Ethernet LAN.

If a network is created using hubs, a single large broadcast domain results, as illustrated in Figure 8-2.

Figure 8-2. Two Broadcast Domains Connected Across a WAN



Because all devices within the broadcast domain see traffic from all other devices within the domain, the network can become congested. Broadcasts are stopped only at the router, at the edge of the broadcast domain, before traffic is sent across the wide-area network (WAN) cloud.

If the network hubs are replaced with switches, you can create VLANs within the existing physical network, as illustrated in Figure 8-3.

Figure 8-3. Two VLANs Connected Across a WAN



When a VLAN is implemented, its logical topology is independent of the physical topology, such as the LAN wiring. Each host on the LAN can be assigned a VLAN identification number (ID), and hosts with the same VLAN ID behave and work as though they are on the same physical network. This means the VLAN traffic is isolated from other traffic, and therefore all communications remain within the VLAN. The VLAN ID assignment made by the switches can be managed remotely with the right network management software.

Depending on the type of switching technology used, VLAN switches can function in different ways; VLANs can be switched at the data link (Open System Interconnection [OSI] model Layer 2) or the network layer (OSI model Layer 3). The main advantage of using a VLAN is that users can be grouped together according to their network communications requirements, regardless of their physical locations, although some limitations apply to the number of nodes per VLAN (500 nodes). This segmentation and isolation of network traffic helps reduce unnecessary traffic, resulting in better network performance because the network is not flooded. Don’t take this advantage lightly, because VLAN configuration takes considerable planning and work to implement; however, almost any network manager will tell you it is worth the time and energy.


An end node can be assigned to a VLAN by inspecting its Layer 3 address, but a broadcast domain is a Layer 2 function. If a VLAN is switched based on Layer 3 addressing, it is in essence routed. There are two basic differences between routing and switching: First, the forwarding decision is performed by the application-specific integrated circuit (ASIC) at the port level for switching, versus the reduced instruction set computer (RISC) chip or main processor for routing; second, the information used to make the decision is located at a different part of the data transfer (packet versus frame).

Question: What is the major difference between LAN and VLAN ?


A Local Area Network is a computer network to which a large number of computers and other peripheral devices are connected within a geographical area. A VLAN is an implementation of a private subset of a LAN in which the computers interact with each other as if they were connected to the same broadcast domain, irrespective of their physical locations. A VLAN delivers better performance, requires less network administration work, eliminates the need for expensive routers, and offers more security than a LAN.


1. VLAN delivers better performance when compared to traditional LANs.

2. VLAN requires less network administration work when compared to LANs.

3. VLAN helps to reduce costs by eliminating the need for expensive routers unlike LANs.

4. Data transmission on VLAN is safe when compared to traditional LANs.




A LAN (local area network) exists within a building, connected with network devices like switches and routers. A VLAN (virtual local area network) is a concept of virtual, logical domain connectivity and communication. VLANs are created on a switch to separate groups and join hosts to the same domain, such as a sales department or a purchase department, so that they can communicate with each other. For example, if there is a VLAN named for the sales department, any computer we join to it can communicate only with the other computers within the sales department VLAN. This is secure, fast, and reduces the burden of purchasing more switches.


In a LAN environment, VLANs are used to separate broadcast domains logically. A VLAN delivers better performance, requires less network administration and helps to reduce broadcast traffic.



Question: WAN, MAN, LAN, WLAN, VLAN and PAN what are these ?

Wide Area Network, WAN is a collection of computers and network resources connected via a network over a geographic area. Wide-Area Networks are commonly connected either through the Internet or special arrangements made with phone companies or other service providers.

Local-Area Network, LAN has networking equipment or computers in close proximity to each other, capable of communicating, sharing resources and information. For example, most home and business networks are on a LAN.

Metropolitan-Area Network, MAN is a network that is utilized across multiple buildings. A MAN is much larger than the standard Local-Area Network (LAN) but is not as large as a Wide Area Network (WAN) and commonly is used in school campuses and large companies with multiple buildings.

Personal Area Network, PAN, is a local network designed to transmit data between personal computing devices (PCs), personal digital assistants (PDAs) and telephones. Gaming devices, like a game console system, may also be set up on a PAN.

Virtual Local Area Network, VLAN is a virtual LAN that allows a network administrator to set up separate networks by configuring a network device, such as a switch, rather than through cabling. This allows a network to be divided, set up, and changed, which lets a network administrator organize and filter data accordingly in a corporate network.

Wireless Local Area Network, WLAN is a type of local network that utilizes radio waves, rather than wires, to transmit data. Today’s computers typically have WLAN support built in, which means no additional WiFi card needs to be installed.


WAN (wide area network): this network is built on connections provided by telecommunications companies, using routers and other media devices. It connects sites country to country and can run multiple routing protocols. If you are interested in knowing more about WANs, CCNA training covers them in depth.

MAN (metropolitan area network): this network has a more limited implementation, typically inside a city, but uses the same kinds of connections, devices and routing protocols as a WAN. Many IT people also consider a long-range wireless network to be a MAN.

LAN (local area network): this network implements connectivity from a router to a switch and on to the computers inside your company or home. There are multiple possible configurations on a LAN, such as VLANs and RSTP, depending on the project.

WLAN (wireless local area network): this is now built into laptops, and there are many WLAN devices that can be inserted into a USB port (you may need to install driver software if your operating system does not recognize the device). A WLAN can connect to a wireless router, with or without internet access, but you need to know the SSID, encryption type and security key.

PAN (personal area network): this is a short-range RF technology first built into laptops and computers, commonly known as Bluetooth. A security key is used to connect to another Bluetooth device, such as a mobile phone, to transfer files.


WAN- Wide Area Network (connects multiple smaller networks, such as local area networks (LANs) or metropolitan area networks (MANs))

MAN- Metropolitan Area Network (a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city)

LAN- Local Area Network (connects network devices over a relatively short distance)

WLAN- Wireless Local Area Network (a LAN based on WiFi wireless network technology)

VLAN- Virtual Local Area Network (a local area network with a definition that maps workstations on some basis other than geographic location)

PAN- Personal Area Network (networks typically involving a mobile computer, a cell phone and/or a handheld computing device such as a PDA)


WAN: Wide Area Networks cover a broad area, with communication links that cross metropolitan, regional or national boundaries.

MAN: Metropolitan Area Networks are very large networks that cover an entire city.

LAN: Local Area Networks cover a small physical area, like a home or office.

WLAN: Wireless Local Area Networks enable users to move around within a larger coverage area while remaining connected.

VLAN: A virtual local area network is a logical group of workstations, servers and network devices that appear to be on the same LAN despite their geographical distribution.

PAN: Personal Area Networks are used for communication among various devices, such as telephones, personal digital assistants, fax machines, and printers


These are all types of networks:

Personal area network, or PAN

Local area network, or LAN

Metropolitan area network, or MAN

Wide area network, or WAN

Storage area network, or SAN

Enterprise private network, or EPN

Virtual private network, or VPN

Most popular network types are LAN and WAN.

One broadcast domain is called a LAN.

A network implemented across large numbers of devices over the Internet is called a WAN.



Question: LAN vs WAN vs MAN vs VLAN vs VPN


Today we will introduce the differences between LAN, WAN, MAN, VLAN and VPN. If you are interested in this knowledge, let’s learn about it!

The following compares LAN, MAN and WAN with respect to various networking parameters.

• Full form: LAN is Local Area Network; MAN is Metropolitan Area Network; WAN is Wide Area Network.
• What is it? In a LAN, systems are close to each other, contained in one office or building; one organization can have several LANs. A MAN is a large network which connects different organizations. A WAN is two or more LANs connected, located over a large geographical area.
• Coverage: a LAN has limited coverage, up to about 2 miles (2,500 meters); a MAN has limited coverage, up to about 100 miles (200 km); a WAN has an unlimited range (usually 1000 km or more), using repeaters and other connectivity for range extension.
• Speed: LANs are fast, typically 10, 100 or 1000 Mbps; MANs are fast, typically 100 Mbps; WANs are slow, about 1.5 Mbps (this may vary based on the wireless technologies used).
• Medium: a LAN is locally installed over twisted pair, fiber optic cable or wireless (e.g. WLAN, Zigbee); a MAN is locally installed and based on a common carrier, e.g. twisted pair or fiber optic cable; a WAN is based on common carriers, using twisted pair wires, fiber, coaxial cable and wireless, including cellular networks.
• Applications: a LAN is used mainly by fixed desktop computers and portable computers (e.g. laptops), and nowadays by smartphones due to the emergence of WLAN; a MAN is used mainly by desktop and mini computers; a WAN can be used by any devices, but mainly desktop devices.

What is the difference between VLAN and VPN?

※ VLAN stands for Virtual Local Area Network. It is a set of hosts that communicate with each other as if they were connected to the same switch (as if they were in the same domain), even if they are not.

※ VPN stands for Virtual Private Network. It provides a secure method for connecting to a private network through a public network that is not secure, such as the internet from a remote location.

※ Both a VLAN and a VPN create a smaller logical sub-network from the hosts of an underlying larger network. The main purpose of a VPN, however, is to provide a secure method for connecting to a private network from remote locations.

Question: VLAN Implementation Guide: The Basics


Virtual LANs are core to enterprise networking. This guide covers VLAN trunks, VLAN planning, and basic VLAN configuration.

If you’re just getting started in the world of network administration and architecture, there’s no better place to begin than with a solid understanding of virtual LANs (VLANs).

In order to understand the purpose of VLANs, it’s best to look at how Ethernet networks previously functioned. Prior to VLANs and VLAN-aware switches, Ethernet networks were connected using Ethernet hubs. A hub was nothing more than a multi-port repeater. When an end device sent information onto the Ethernet network toward a destination device, the hub retransmitted that information out all other ports as a network-wide broadcast.

The destination device would receive the information sent, but so would all other devices on the network, which would simply ignore what they heard. And while this method worked in small environments, the architecture suffered greatly from scalability issues: so much time was spent discarding received messages and waiting for a turn to transmit that Ethernet networks using hubs became congested.

A layer 2 aware switch solves this problem using two different methods. First, the switch has the ability to learn and keep track of devices by their MAC address. By maintaining a dynamic table mapping MAC addresses to switch port numbers, the switch can send messages directly from a source device to the destination device in a unicast transmission, as opposed to a broadcast transmission that is sent to all devices. This table is known as the switch forwarding table.

While the forwarding table does a great deal to limit broadcast messages, and thus reduce the amount of broadcast overhead, it does not completely eliminate it. Broadcast messages are still required in many situations. And as such, the more devices on a physical network, the more broadcast messages are going to be clogging up the network.

That leads us to our second method that layer 2 switches use to streamline Ethernet communication. Instead of having one large layer 2 network, VLANs are used to segment a switch — or network of switches — into multiple, logical layer 2 networks. Broadcast messages sent and received are contained within each smaller VLAN. Thus, if you have a network of 1,000 end devices and create 4 VLANs of 250 devices each, each logical network only has to deal with 250 devices' worth of broadcast overhead, as opposed to all 1,000 if they were on the same layer 2 network.

VLAN trunks

Now that you have an understanding of the purpose of VLANs, the next skill to acquire is the understanding of VLAN trunks. Large networks often contain more than one switch. And if you want to span virtual LANs across two or more switches, a VLAN trunk can be used. VLAN information is local to each switch database, so the only way to pass VLAN information between switches is to use a trunk.

A VLAN trunk can be configured to pass VLAN data for one or all VLANs configured on a switch. The trunk keeps track of which VLAN the data belongs to by adding a VLAN tag to each Ethernet frame that is passed between switches. When the receiving switch gets the frame, it strips the VLAN tag off and places the frame onto the proper local VLAN.

Inter-VLAN routing

The last basic skill regarding VLANs on enterprise networks is the concept of inter-VLAN routing. While devices on the same VLAN can communicate with other devices in the same VLAN, the same cannot be done when the devices belong to different VLANs. This is where inter-VLAN routing is necessary.

As we have learned, a VLAN breaks up a physical layer 2 network into multiple, logical layer 2 networks. In order to move between these layer 2 networks, traffic needs to be routed at layer 3. So while switches can send data from source devices to destination devices using layer 2 MAC addresses, inter-VLAN routing uses IP addressing. This can be either IP version 4 or IPv6, although most enterprise networks still use IPv4 internally.

On enterprise networks that are well planned, each VLAN configured is its own unique IPv4 subnet. For example, devices on VLAN 10 will be configured to use IPv4 addresses in the 10.10.10.X IP space, while devices on VLAN 99 will be configured to use addresses in the 10.10.99.x space. In addition to each device having its own IP address and subnet mask, a default gateway IP address is required. Every device in VLAN 10 will be configured to use the same default gateway IP address, and every device configured for VLAN 99 will use the gateway on its own subnet. The default gateway IP address is a router interface (either physical or virtual) that is responsible for routing traffic to other IP networks.

So if a device in VLAN 10 needs to communicate with a device in VLAN 99, the VLAN 10 device will forward the data to its default gateway. Layer 3 routing will occur and forward the data to the default gateway of VLAN 99. Once on the correct destination VLAN, the data is then forwarded at layer 2 to the destination endpoint.

Planning a VLAN strategy

Depending on the size of the network, planning a VLAN strategy can be either fairly easy or somewhat complex. Remember, because each VLAN is also its own sub-network, we have to come up with a VLAN strategy that makes the most sense in terms of grouping devices. In today's modern networks with virtualized layer 2 and layer 3 networks, the number of VLANs and layer 3 interfaces that can be configured on enterprise hardware runs into the thousands. Additionally, since inter-VLAN routing can now be performed at wire speed, there is no noticeable difference between sending/receiving traffic from devices on the same VLAN vs. different VLANs.

That being said, due to broadcast overhead, it's typically advisable that a single VLAN not have any more than 500 or so devices. Beyond this, you begin to have network congestion problems due to a significant increase in broadcast traffic on the layer 2 segment. Most network designs call for subnet sizes of no more than 250 devices.

In terms of how to segment devices onto different VLANs, security is the primary factor today. From a security standpoint, it's best to place similar devices onto the same subnets. For example, put all employee computers on VLAN 10, printers on VLAN 20, servers on VLAN 50 and IP phones on VLAN 100. By doing this, you can easily apply layer 3 filters or firewall rules that target specific devices and control how traffic in and out of each VLAN is treated.

Configuring a VLAN and adding a switch port

Let's now move on to how to configure VLAN basics using a Cisco switch. In this example, we will configure VLAN 80 as our server VLAN. We will then configure switch port 10 to use this new VLAN. Keep in mind that out of the box, only VLAN 1 is configured on the switch, and all switch ports are configured to use this VLAN.
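The original configuration listing is not included here, so the following is a minimal Cisco IOS sketch of the steps just described (the interface name and the VLAN name "servers" are assumptions; interface naming varies by switch model):

Switch#configure terminal
Switch(config)#vlan 80                          (create the server VLAN)
Switch(config-vlan)#name servers                (hypothetical descriptive name)
Switch(config-vlan)#exit
Switch(config)#interface gigabitethernet 0/10   (switch port 10; name assumed)
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 80     (move the port from VLAN 1 to VLAN 80)
Switch(config-if)#end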

Configuring a VLAN trunk

In this next example, let's assume that we have two switches that are connected by a single Ethernet interface: port 20 on both switches. Each switch has been configured with VLANs 1, 2 and 3. The goal is to trunk only these three VLANs between the two switches. To accomplish this, the same trunk configuration is applied on both switches.
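As a sketch of the trunk configuration on each switch (exact commands vary by platform; some older models also require a switchport trunk encapsulation dot1q command before trunk mode can be set):

Switch#configure terminal
Switch(config)#interface gigabitethernet 0/20           (the inter-switch link, port 20)
Switch(config-if)#switchport mode trunk                 (carry tagged frames for multiple VLANs)
Switch(config-if)#switchport trunk allowed vlan 1,2,3   (restrict the trunk to VLANs 1-3)
Switch(config-if)#end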

Configuring an SVI for inter-VLAN routing

A switched virtual interface (SVI) is the name of a virtual router interface on a layer 3 switch. The virtual interface is the VLAN's default gateway, used for routing traffic between networks. In this example, we will configure an SVI for VLAN 10 and VLAN 20. VLAN 10 will use the IPv4 subnetwork 10.10.10.X/24 and VLAN 20 will use the subnetwork 10.10.20.X/24, with each SVI serving as the default gateway for its VLAN. Once complete, the switch will then be able to route traffic between the two VLANs via layer 3 routing.
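A minimal sketch of the SVI configuration, assuming the switch supports layer 3 routing and using .1 as the gateway address on each subnet (a common convention, assumed here rather than stated in the text):

Switch#configure terminal
Switch(config)#ip routing                       (enable layer 3 routing on the switch)
Switch(config)#interface vlan 10
Switch(config-if)#ip address   (gateway for the 10.10.10.X/24 network)
Switch(config-if)#no shutdown
Switch(config-if)#interface vlan 20
Switch(config-if)#ip address   (gateway for the 10.10.20.X/24 network)
Switch(config-if)#no shutdown
Switch(config-if)#end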

Advanced VLAN topics to research

If you're looking to learn some more advanced skills related to VLANs, I recommend researching the following topics:

Spanning Tree Protocol (STP)

VLAN Trunking Protocol (VTP)

Private VLANs

Dynamic VLANs

VLAN security weaknesses

Question: VLAN


LAN

  • A Local Area Network (LAN) was originally defined as a network of computers located within the same area.
  • A LAN forms a single broadcast domain: if a user broadcasts information on the LAN, the broadcast is received by every other user on the LAN.
  • Broadcasts are prevented from leaving a LAN by using a router. The disadvantage of this method is that routers usually take more time to process incoming data than a bridge or a switch.

VLAN

  • A VLAN is a logical group of network devices that appears to be on the same LAN.
  • The devices are configured as if they were attached to the same physical connection, even if they are located on a number of different LAN segments.
  • VLANs logically segment a LAN into different broadcast domains: broadcast frames are only switched within the same VLAN ID.
  • Because the segmentation is logical rather than physical, workstations do not have to be physically located together. Users on different floors of the same building, or even in different buildings, can belong to the same VLAN.

LAN vs. VLAN

By using switches, we can assign computers on different floors to VLAN1, VLAN2, and VLAN3. Logically, a department then forms a single network even though its members are physically spread across three floors.

VLAN Configurations

Static VLANs

  • Static membership VLANs are also called port-based or port-centric membership VLANs.
  • This is the most common method of assigning ports to VLANs.
  • As a device enters the network, it automatically assumes the VLAN membership of the port to which it is attached.
  • There is a default VLAN; on Cisco switches it is VLAN 1.

Dynamic VLANs

  • Dynamic membership VLANs are created through network management software.
  • Dynamic VLANs allow for membership based on the MAC address of the device connected to the switch port.
  • As a device enters the network, the switch queries a database for the device's VLAN membership.

Configuring Ports

Access ports are used when:

  • only a single device is connected to the port;
  • multiple devices (e.g. via a hub) are connected to the port, all belonging to the same VLAN;
  • another switch is connected to the interface, but the link carries only a single VLAN (non-trunk link).

Trunk ports are used when another switch is connected to the interface and the link carries multiple VLANs (trunk link).

Switch(config-if)# switchport mode [access|trunk]

An access port means that the port (interface) can belong to only a single VLAN.

Switch(config-if)# switchport mode access   (access port)
Switch(config-if)# switchport mode trunk    (trunk port)

VLAN Trunking

  • In a switched network, a trunk is a point-to-point link that supports several VLANs.
  • The purpose of a trunk is to conserve ports when a link is created between two devices that implement VLANs.

VLAN Techniques

There are two techniques:

  • Frame filtering examines particular information about each frame (MAC address or layer 3 protocol type).
  • Frame tagging places a unique identifier in the header of each frame as it is forwarded throughout the network backbone.

Frame Filtering

Users can be logically grouped via software based on:

  • port number
  • MAC address
  • IP subnet
  • protocol being used

Membership by port:

  Port  VLAN
  1     1
  2     1
  3     2
  4     1

The disadvantage of this method is that it does not allow for user mobility.

Membership by MAC address:

  MAC address      VLAN
  1212354145121    1
  2389234873743    1
  3045834758445    2
  5483573475843    1

Advantage: no reconfiguration is needed when a user moves. Disadvantages: VLAN membership must be assigned initially, and performance degrades when members of different VLANs coexist on a single switch port.

Membership by IP subnet address:

  IP subnet   VLAN
  23.2.24     1
  26.21.35    2

Advantages: good for an application-based VLAN strategy; users can move workstations; eliminates the need for frame tagging.

VLAN Tagging

  • VLAN frame tagging was developed specifically for switched communications.
  • Frame tagging places a unique identifier in the header of each frame as it is forwarded throughout the network backbone.
  • The identifier is understood and examined by each switch before any broadcasts or transmissions are made to other switches, routers, or end stations.
  • When the frame exits the network backbone, the switch removes the identifier before the frame is transmitted to the target end station.

The two most common tagging schemes for Ethernet segments are:

  • ISL (Inter-Switch Link)
  • 802.1Q, an IEEE standard

ISL (Frame Encapsulation)

  • An Ethernet frame is encapsulated with a header that transports VLAN IDs. The ISL encapsulation is added by the switch before the frame is sent across the trunk and removed before the frame is sent out a non-trunk link.
  • ISL adds overhead to the frame: a 26-byte header containing a 10-bit VLAN ID, plus a 4-byte cyclic redundancy check (CRC) appended to the end of each frame. This CRC is in addition to any frame checking that the Ethernet frame requires.

IEEE 802.1Q

  • Significantly less overhead than ISL: 802.1Q inserts only an additional 4 bytes into the Ethernet frame.
  • The 802.1Q tag is inserted by the switch before the frame is sent across the trunk and removed before the frame is sent out a non-trunk link.

Trunking Protocols

  • Trunking protocols were developed to effectively manage the transfer of frames from different VLANs on a single physical link.
  • They establish agreement for the distribution of frames to the associated ports at both ends of the trunk.
  • VLAN tagging information is added by the switch before a frame is sent across the trunk and removed by the switch before the frame is sent down a non-trunk link.
SwitchA(config-if)# switchport mode trunk
SwitchB(config-if)# switchport trunk encapsulation dot1q
SwitchB(config-if)# switchport mode trunk

  • If SwitchA can only be an 802.1Q trunk and SwitchB can be either an ISL or an 802.1Q trunk, configure SwitchB for 802.1Q.
  • On switches that support both 802.1Q and ISL, the switchport trunk encapsulation command must be entered BEFORE the switchport mode trunk command.

VLAN Configuration under Linux

Configuring VLANs under Linux is a process similar to configuring regular Ethernet interfaces. The main difference is that you first must attach each VLAN to a physical device, which is accomplished with the vconfig utility. If the trunk device itself is configured, it is treated as native. For example, these commands define VLANs 2-4 on device eth0:

vconfig add eth0 2
vconfig add eth0 3
vconfig add eth0 4
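On modern Linux distributions the vconfig utility is deprecated; the same VLAN interfaces can be created with iproute2. A sketch, assuming the physical device is named eth0 (commands require root):

# Create VLAN interfaces 2-4 on eth0 with iproute2, then bring one up
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
ip link add link eth0 name eth0.4 type vlan id 4
ip link set eth0.2 up

Each eth0.N interface then behaves like a regular Ethernet interface whose traffic is tagged with VLAN ID N.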
Switch Configuration

Before you begin configuration, make sure the IP address of the switch falls within the new management subnet. The IP configuration is associated with a virtual interface, normally VLAN1:

interface VLAN1
 ip address

Enabling the Trunk

interface FastEthernet 0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk

Moving the Ports

interface FastEthernet0/2
 switchport access vlan 2
interface FastEthernet0/3
 switchport access vlan 2
interface FastEthernet0/4
 switchport access vlan 3
interface FastEthernet0/5
 switchport access vlan 3

Once your changes are complete, you can see which ports are in which VLAN by using the show vlan command.

Benefits of VLANs

  • Performance
  • Formation of virtual workgroups
  • Simplified administration
  • Reduced cost
  • Security

References

  • David Passmore and John Freeman, "The Virtual LAN Technology Report"
  • Paul Frieden, "VLANs on Linux"
  • Cisco documentation
  • TPID: a defined value of 0x8100. When a frame has an EtherType equal to 0x8100, it carries an IEEE 802.1Q/802.1p tag.
  • TCI: the Tag Control Information field, comprising the user priority, the Canonical Format Indicator, and the VLAN ID.
  • User priority: 3 bits giving eight (2^3) priority levels. IEEE 802.1p defines the operation of these bits.
  • CFI: the Canonical Format Indicator is always set to zero for Ethernet switches. CFI exists for compatibility between Ethernet-type and Token Ring-type networks. If a frame received at an Ethernet port has CFI set to 1, that frame should not be forwarded as-is to an untagged port.
  • VID: the VLAN ID identifies the VLAN and is the field used by the 802.1Q standard. It has 12 bits, allowing the identification of 4096 (2^12) VLANs. Of the 4096 possible VIDs, VID 0 identifies priority frames and 4095 (0xFFF) is reserved, so at most 4,094 VLANs can be configured.
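The bit layout described above can be checked with a few lines of code. This sketch (plain Python, not tied to any networking library) packs and unpacks the 4-byte 802.1Q tag: a 16-bit TPID followed by a 16-bit TCI split into 3 priority bits, 1 CFI bit and a 12-bit VID:

```python
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def build_tag(pcp: int, cfi: int, vid: int) -> bytes:
    """Pack the 4-byte 802.1Q tag: 16-bit TPID + 16-bit TCI (3+1+12 bits)."""
    tci = (pcp << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag: bytes):
    """Unpack a 4-byte tag back into (priority, cfi, vid)."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

tag = build_tag(pcp=5, cfi=0, vid=100)
print(tag.hex())       # 8100a064
print(parse_tag(tag))  # (5, 0, 100)
```

Note that vid=4095 would still pack, mirroring the fact that the reserved value is excluded by convention rather than by the bit layout itself.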
Switch model              Supported VLANs
Catalyst 2950-12          64
Catalyst 2950-24          64
Catalyst 2950C-24         250
Catalyst 2950G-12-EI      250
Catalyst 2950G-24-EI      250
Catalyst 2950G-48-EI      250
Catalyst 2950G-24-EI-DC   250
Catalyst 2950T-24         250

Question: What is a virtual LAN (VLAN) and how does it work with my managed switch?


A VLAN is a set of end stations and the switch ports that connect them. You can have different reasons for the logical division, such as department or project membership. The only physical requirement is that the end station and the port to which it is connected both belong to the same VLAN.

Adding virtual LAN (VLAN) support to a Layer 2 switch offers some of the benefits of both bridging and routing. Like a bridge, a VLAN switch forwards traffic based on the Layer 2 header, which is fast. Like a router, it partitions the network into logical segments, which provides better administration, security, and management of multicast traffic.

Each VLAN in a network has an associated VLAN ID, which appears in the IEEE 802.1Q tag in the Layer 2 header of packets transmitted on a VLAN. An end station might omit the tag, or the VLAN portion of the tag, in which case the first switch port to receive the packet can either reject it or insert a tag using its default VLAN ID. A given port can handle traffic for more than one VLAN, but it can support only one default VLAN ID.

The Private Edge VLAN feature lets you set protection between ports located on the switch. This means that a protected port cannot forward traffic to another protected port on the same switch. The feature does not provide protection between ports located on different switches.

The diagram in this article shows a switch with four ports configured to handle the traffic for two VLANs. Port 1/0/2 handles traffic for both VLANs, while port 1/0/1 is a member of VLAN 2 only, and ports 1/0/3 and 1/0/4 are members of VLAN 3 only. The script following the diagram shows the commands you would use to configure the switch as shown in the diagram.


Question: VLANs (Virtual LANs)


What is a VLAN?

In the simplest of LAN topologies, you have a single physical network and everything on that LAN can communicate with any other device. On a simple private LAN you have a single IP subnet, so all devices are part of the same physical LAN ('wiring') and the same logical LAN (IP network).

A Virtual LAN (‘VLAN’) is a method of segmenting different devices according to their location, function or security clearance.

For example, you may wish to separate departments (sales, accounts, R&D) or separate company traffic/data from guests using WiFi in your premises. The rules set for VLANs can set whether each VLAN can or cannot communicate with any other. A VLAN can also provide additional security by ensuring that physical networks only carry necessary data, perhaps omitting more sensitive data. A VLAN can be physically separated or separated by differential labelling of datagrams.

VLANs vs. Subnets

It’s important to remember that a VLAN is not the same as a different subnet. Subnets provide IP addressing space and logical departmental or network numbering, but they do not separate the networks or provide any security. If you just have multiple subnets, any device could have more than one IP address or connect to either subnet, as both are available on the same physical network. VLANs and subnets can be used together: each subnet can be within a different VLAN. This is a common arrangement, as it makes it easier to keep track of your VLANs.

Types of VLAN

There are two main types of VLAN: port-based and tag-based. They can be used in combination with each other. VLANs can increase both network efficiency and security.

Port Based VLANs

A port based VLAN is one where the physical ports of an Ethernet switch (such as the one built into your router) are separated so that traffic does not pass between chosen ports. You can choose which ports can and can’t communicate with each other.

For example, suppose you have one PC plugged directly into each port on your router, and all PCs have access to the Internet. You set up two VLANs (VLAN0 and VLAN1). The PCs on ports 1, 2 and 3 are in VLAN0 and can communicate with each other but not with the PCs/devices on the other ports. Ports 5 and 6 are in the other VLAN and cannot communicate with ports 1, 2 and 3. Port 4 is set to be in both VLANs, so the PC on that port can communicate with all other devices. That is a port-based VLAN: each physical port is isolated from, or common to, a group:

In the example below, within the setup of the router, we have set up two VLANs that are each a member of the Subnet LAN1, operating in the same IP range but separated. VLAN0 has Ethernet ports 1-4 in it, and VLAN1 contains Ports 4-6. See how Port 4 is in both VLANs, so the device (PC) connected to port 4 will be able to communicate with all devices in VLAN0 and VLAN1 but all other devices will be restricted to devices within their own VLAN:

If a port is common to more than one VLAN, your router will allow that port to communicate with the ports in each VLAN that it is a member of.

The VLANs are not able to communicate directly but the device connected to that port, such as a printer, would be accessible by each of the VLANs.

A port doesn’t have to connect to a PC directly, it can feed a secondary Ethernet switch; in that case, the switch will inherit the VLAN characteristics and receive only data which is part of that port’s VLAN.

Tag Based VLANs

A tag-based VLAN is one where an identifier label (or 'tag') is added to the Ethernet frame to identify it as belonging to a specific VLAN group. This has the advantage over port-based VLANs that multiple tagged VLANs can be sent over the same physical network/cable and split only where required, making it inherently scalable. The most common protocol for defining VLAN tags is 802.1Q. Remember that VLAN tags exist at Layer 2, not the IP layer, so even if you have multiple IP subnets, they can all belong to the same VLAN structure.

In the diagram below, we have 3 VLANs (IDs 10, 11 and 12), all of which are available on port 2 of the router. The router connects to a larger switch which in turn splits the VLANs up so that each goes only to specific onward ports on the switch:

The most common use of tagged VLANs is to separate IP subnets, but they can also be used departmentally or for specific devices or services. Tag-based VLANs provide much more scalability than port-based VLANs. Whether they provide any additional security will depend entirely on your topology.

To make use of tagged VLANs, all networking components must recognise and support VLAN tags. The device, for example, might be a secondary Ethernet switch with 24 ports and is set to split one VLAN to be distributed onto ports 1-12 and another VLAN onto ports 13-24. The device may instead be a wireless access point which supports multiple SSIDs. It takes data with one VLAN tag to serve SSID1, and another VLAN to serve SSID2. That way, the wireless access point is fed by only one Ethernet cable but can serve two completely separated wireless networks.

In the example, we have three VLANs set up and we have given each a unique VLAN tag; that can be anything you like, but in our case we have chosen 10, 11 and 12 for VLANs 1, 2 and 3 respectively. Vigor 2860 Port 2 is included in VLANs 1, 2 and 3, which means that it is able to send and receive traffic for these VLANs. A switch such as the P2261 would then be connected to Vigor 2860 Port 2, and the corresponding port on the switch would also be configured with the same VLAN tags. Other ports on the P2261 switch can be configured with a VLAN tag to allow a device connected to the port to communicate with the VLAN matching the tag.

In our example P2261:

  • Ports 3, 4, 5, 6 have a tag of 10 so would be able to communicate with VLAN1.
  • Ports 7, 8, 9, 10 have a tag of 11 so would be in VLAN2 and port 11 and 12 have a tag of 12 to associate them with VLAN3.

The “Permit untagged device in P1 to access router” box is ticked, which means that a PC can be connected directly to Vigor 2860 port 1 without needing to be VLAN-aware and still communicate with the router. Devices connected directly to ports P3, P4, P5 and P6 would need to be VLAN-aware.

Combining tags, ports and Wireless SSIDs

DrayTek routers allow you to combine port-based VLANs, tagged VLANs, physical Ethernet ports and wireless SSIDS (for wireless equipped routers), allowing much flexibility. The actual VLAN setup page therefore looks like this:

Devices which do not support tags

Not all networking equipment supports tagged VLANs, so to accommodate those, you can have tagged data and untagged data running on the same network, perhaps physically isolated by port-based VLANs, or your switch can remove the VLAN tag before forwarding the data onto the connected device. A feature of most tag-capable Ethernet switches is that they can add, remove, change or forward VLAN tags.

Note : The capability of any particular product will vary; please refer to specifications of each product for feature support.

Question: VLAN

Stands for “Virtual Local Area Network,” or “Virtual LAN.” A VLAN is a custom network created from one or more existing LANs. It enables groups of devices from multiple networks (both wired and wireless) to be combined into a single logical network. The result is a virtual LAN that can be administered like a physical local area network.

In order to create a virtual LAN, the network equipment, such as routers and switches, must support VLAN configuration. The hardware is typically configured using a software admin tool that allows the network administrator to customize the virtual network. The admin software can be used to assign individual ports or groups of ports on a switch to a specific VLAN. For example, ports 1-12 on switch #1 and ports 13-24 on switch #2 could be assigned to the same VLAN.

Say a company has three divisions within a single building — finance, marketing, and development. Even if these groups are spread across several locations, VLANs can be configured for each one. For instance, each member of the finance team could be assigned to the “finance” network, which would not be accessible by the marketing or development teams. This type of configuration limits unnecessary access to confidential information and provides added security within a local area network.

VLAN Protocols

Since traffic from multiple VLANs may travel over the same physical network, the data must be mapped to a specific network. This is done using a VLAN protocol, such as IEEE 802.1Q, Cisco’s ISL, or 3Com’s VLT. Most modern VLANs use the IEEE 802.1Q protocol, which inserts an additional header or “tag” into each Ethernet frame. This tag identifies the VLAN to which the sending device belongs, preventing data from being routed to systems outside the virtual network. Data is sent between switches using a physical link called a “trunk” that connects the switches together. Trunking must be enabled in order for one switch to pass VLAN information to another.

Up to 4,094 VLANs can be created within an Ethernet network using the 802.1Q protocol, but in most network configurations only a few VLANs are needed. Wireless devices can be included in a VLAN, but they must be routed through a wireless router that is connected to the LAN.

Question: VLAN Basics


Virtual Local Area Networks (VLANs) divide a single existing physical network into multiple logical networks. Thereby, each VLAN forms its own broadcast domain. Communication between two different VLANs is only possible through a router that has been connected to both VLANs. VLANs behave as if they had been constructed using switches that are independent of each other.

Types of VLANs

In principle, there are two approaches to implementing VLANs:

  • as port-based VLANs (untagged)
  • as tagged VLANs

Port-based VLANs

With regard to port-based VLANs, a single physical switch is simply divided into multiple logical switches. The following example divides an eight-port physical switch (Switch A) into two logical switches.

Eight-port switch with two port-based VLANs

Switch A
Port   VLAN ID      Connected device
1      1 (green)    PC A-1
2      1 (green)    PC A-2
3      1 (green)    (not used)
4      1 (green)    (not used)
5      2 (orange)   PC A-5
6      2 (orange)   PC A-6
7      2 (orange)   (not used)
8      2 (orange)   (not used)

Although all of the PCs have been connected to one physical switch, only the following PCs can communicate with each other due to the configuration of the VLAN:

  • PC A-1 with PC A-2
  • PC A-5 with PC A-6
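The reachability rule above can be sketched as a toy model (illustrative only, not a real switch API; the port-to-VLAN table mirrors Switch A):

```python
# Port-based VLAN reachability on a single switch: frames are only
# switched between ports that belong to the same VLAN.
vlan_of = {1: 1, 2: 1, 3: 1, 4: 1,   # VLAN 1 (green)
           5: 2, 6: 2, 7: 2, 8: 2}   # VLAN 2 (orange)

def can_communicate(port_a: int, port_b: int) -> bool:
    """True only when both ports exist and share a VLAN ID."""
    va, vb = vlan_of.get(port_a), vlan_of.get(port_b)
    return va is not None and va == vb

print(can_communicate(1, 2))  # True: both in VLAN 1
print(can_communicate(1, 5))  # False: VLAN 1 vs VLAN 2
```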

Assume that there are also four PCs in the neighboring room. PC B-1 and PC B-2 should be able to communicate with PC A-1 and PC A-2 in the first room. Likewise, communication between PC B-5 and PC B-6 in Room 2 and PC A-5 and PC A-6 should be possible.

There is another switch in the second room.

Switch B
Port   VLAN ID      Connected device
1      1 (green)    PC B-1
2      1 (green)    PC B-2
3      1 (green)    (not used)
4      1 (green)    (not used)
5      2 (orange)   PC B-5
6      2 (orange)   PC B-6
7      2 (orange)   (not used)
8      2 (orange)   (not used)

Two cables will be required for connecting both VLANs.

  • One cable from Switch A Port 4 to Switch B Port 4 (for VLAN 1)
  • One from Switch A Port 8 to Switch B Port 8 (for VLAN 2)

Connection of both VLANs to the physical switch. Two cables are required for port-based VLANs.

Note on PVID: For some switches it is necessary to set the PVID (Port VLAN ID) on untagged ports in addition to the VLAN ID of the port. This specifies which VLAN any untagged frames should be assigned to when they are received on this untagged port. The PVID should therefore match the configured VLAN ID of the untagged port.[1][2]

Tagged VLANs

With regard to tagged VLANs, multiple VLANs can be used through a single switch port. Tags containing the respective VLAN identifiers indicating the VLAN to which the frame belongs are attached to the individual Ethernet frames. If both switches understand the operation of tagged VLANs in the example above, the reciprocal connection can be accomplished using one single cable.

Connection of both VLANs to both physical switches using a single cable. VLAN tags (IEEE 802.1q) are used on this cable (or trunk).

Structure of an Ethernet Frame

The VLAN tag is inserted into the Ethernet frame immediately after the source MAC address field.


Question: The Difference Between VLANs and Subnets


At a high level, subnets and VLANs are analogous in that they both deal with segmenting or partitioning a portion of the network. However, VLANs are data link layer (OSI layer 2) constructs, while subnets are network layer (OSI layer 3) IP constructs, and they address (no pun intended) different issues on a network. Although it’s a common practice to create a one-to-one relationship between a VLAN and subnet, the fact that they are independent layer 2 and layer 3 constructs adds flexibility when designing a network.


Subnets (IPv4 implementation)

An IP address can be logically split (a.k.a. subnetting) into two parts: a network/routing prefix and a host identifier. Network devices that belong to a subnet share a common network/routing prefix in their IP address. The network prefix is determined by applying a bitwise AND operation between the IP address and the subnet mask. With a /24 subnet mask (255.255.255.0), for example, the first three octets of the address form the network prefix (the subnet) and the final octet is the host identifier.
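The bitwise AND described above can be demonstrated with Python's ipaddress module (the address 192.168.10.37 and /24 mask are illustrative assumptions, not from the article):

```python
import ipaddress

# Illustrative example values (assumptions, chosen for demonstration)
addr = ipaddress.ip_address("192.168.10.37")
mask = ipaddress.ip_address("255.255.255.0")

# Bitwise AND of address and mask yields the network prefix (the subnet)
prefix = ipaddress.ip_address(int(addr) & int(mask))
# The remaining bits are the host identifier
host_id = int(addr) & ~int(mask) & 0xFFFFFFFF

print(prefix)   # 192.168.10.0
print(host_id)  # 37

# The ipaddress module performs the same split directly:
net = ipaddress.ip_network("192.168.10.37/24", strict=False)
print(net.network_address)  # 192.168.10.0
```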

Traffic is exchanged or routed between subnetworks via routers (many modern switches also include router functionality) when the routing/subnet prefixes of the source address and the destination address differ. A router constitutes the logical and/or physical boundary between subnets.

The benefits of subnetting a network vary with each deployment scenario. In large organizations or those using Classless Inter-Domain Routing (CIDR), it’s necessary to allocate address space efficiently. It may also enhance routing efficiency, or have advantages in network management when subnetworks are administered by different internal groups. Subnets can be arranged logically in a hierarchical architecture, partitioning an organization’s network address space into a tree-like routing structure.


VLANs

A VLAN has the same attributes as a physical local area network, but it allows for devices to be grouped together more easily, even if they are not connected on the same network switch. Separating ports by VLAN groups separates their traffic in a similar fashion to connecting the devices to a separate, distinct switch of their own. VLANs can provide a very high level of security with great flexibility for a comparatively low cost.

Network architects use VLANs to segment traffic for issues such as scalability, security, and network management. Switches can’t (or at least shouldn’t) bridge IP traffic between VLANs because doing so would violate the integrity of the VLAN broadcast domain, so if one VLAN becomes compromised in some fashion, the remainder of the network will not be impeded. Quality of Service schemes can optimize traffic on VLANs for real-time (VoIP) or low-latency requirements (SAN).

Without VLANs, a switch considers all devices on the switch to be in the same broadcast domain, so VLANs can essentially create multiple layer 3 networks on a single physical infrastructure. For example, if a DHCP server is plugged into a switch it will serve any host on that switch that is configured for DHCP. By using VLANs, the network can be easily split up so some hosts will not use that DHCP server and will obtain link-local addresses, or obtain an address from a different DHCP server.

Additional Thoughts

You can have one physical network and configure two or more logical networks by simply assigning different subnets. The problem, though, is that both subnets transmit data through the same switch. Traffic going through the switch can be seen by all other hosts, no matter which subnet they're on. The result is that security is low, and there will be less bandwidth available since all traffic uses the same backbone.

As an alternative, you can create a VLAN for each logical network. Bandwidth availability for each VLAN (or logical network) is no longer shared, and security is improved because the switch that connects each VLAN network (in theory…) will not allow traffic to cross between the VLANs.

Usually VLANs are the better choice for many applications, including audio, but there are times when subnetting makes sense. The main reasons are:

  1. Mitigating performance problems because LANs can’t scale indefinitely. Excessive broadcasts or flooding of frames to unknown destinations will limit their scale. Either of these conditions can be caused by making a single broadcast domain in an Ethernet LAN too big. Bandwidth exhaustion (unless it’s caused by broadcast packets or flooding of frames) is not typically solved with VLANs and subnetting, though, since they won’t increase the amount of bandwidth available. It usually happens because of a lack of physical connectivity (too few NICs on a server, too few ports in a group, the need to move up to a faster port speed, etc.). The first step is to monitor network traffic and identify trouble spots. Once you know how traffic moves around on your LAN, you can begin to think about subnetting for performance reasons.
  2. A desire to limit / control traffic moving between hosts at layer 3 or above. If you want to control IP (or TCP, or UDP, etc.) traffic between hosts, rather than attacking the problem at layer 2, you might consider subnetting and adding firewalls / routers with ACLs between the subnets.

Question: Windows 7, Network and Sharing center shows no network, but computer is connected and browsing


Windows 7 Home Premium 32Bit

The Network and Sharing center shows no network connection. Ethernet cable is plugged in, web browsing functions perfectly.

I’m attempting an Anytime Upgrade but it does not attempt to connect if the Network and Sharing center thinks it is not connected.

Only the Network and Sharing center shows something wrong. All network tests pass with flying colors.

IPv6 is disabled.

Have tested each adapter while the other(s) were disabled and show no change.

No warning of Limited Network Access.

IPCONFIG with Wireless adapter disabled:

C:\Users\admin>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : ITA00000589
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : CM.local

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . : CM.local
   Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
   Physical Address. . . . . . . . . : 64-31-50-10-01-33
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . :
   Subnet Mask . . . . . . . . . . . :
   Lease Obtained. . . . . . . . . . : Wednesday, December 14, 2011 10:31:42 AM
   Lease Expires . . . . . . . . . . : Friday, December 16, 2011 11:55:20 AM
   Default Gateway . . . . . . . . . :
   DHCP Server . . . . . . . . . . . :
   DNS Servers . . . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled

Tunnel adapter isatap.CM.local:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : CM.local
   Description . . . . . . . . . . . : Microsoft ISATAP Adapter
   Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes

Tunnel adapter Local Area Connection* 12:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Microsoft 6to4 Adapter
   Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes

Tunnel adapter Teredo Tunneling Pseudo-Interface:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
   Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes



I’ve seen this a few times, most often on Vista, and it’s annoying.

The easiest thing I’ve found that ‘fixed it’ in many cases (not all) was to merge and erase all the various network entries/profiles (wired and/or wireless), until there were none.

I’m NOT talking about the networking devices/drivers themselves. Just the various “Home”, “Work”, and “Public” network entries representing your networks.

Reboot, let it rediscover and reconnect to the network(s) (it should ask you which ‘type’ again).

Hopefully it will be less confused after that. 🙂

To do this:

  1. Open “Control Panel”
  2. Select and open “Network and Sharing Center”
  3. Click the “Icon” (like the House icon) under “View your active networks”. This will open the “Set Network Properties” dialog. Here you can rename a network connection or change the icon for that network connection.
  4. Click “Merge or Delete Network Locations” to see a list of stored network connections. You can merge or delete connections here as well as see if a network connection is in use and managed or unmanaged.


Check your network-card drivers. I’ve run into this with older network cards/drivers several times. More than likely, you need to go to the manufacturer’s website to get the correct driver. Many network adapters will “work”, but because they don’t have the proper bits to tell Windows 7/Vista that they are indeed Ethernet adapters, they aren’t treated like normal Ethernet network adapters; instead they are treated more like a generic network interface that could be virtual or some form of tunneling adapter.

Question: Subnetting, netmasks and slash notation


Netmasks are used in ACLs (access control lists), firewalls, routing and subnetting. They involve grouping IP addresses. Each range contains a power-of-two (1, 2, 4, 8, 16, etc.) number of addresses and starts on a multiple (0, 1, 2, 3, etc.) of that number of addresses.


Most people are used to class A, B and C networks. These have the following first and last addresses, netmasks and sizes:

   1st address   Last address      Netmask         Size per network

A: 0.0.0.0      127.255.255.255   255.0.0.0       16,777,216
B: 128.0.0.0    191.255.255.255   255.255.0.0     65,536
C: 192.0.0.0    223.255.255.255   255.255.255.0   256

127.0.0.0 is reserved for the loopback, with 127.0.0.0 as network address, 255.0.0.0 as netmask and 127.255.255.255 as its broadcast address. 0.0.0.0 is the entire Internet, with netmask 0.0.0.0 and 255.255.255.255 as its broadcast address. 0.0.0.0 with netmask 255.255.255.255 is an unconfigured interface. 224.0.0.0 … 239.255.255.255 is used for multicast. 240.0.0.0 … 255.255.255.255 is reserved.

CIDR does not link the number of hosts to the network address, at least not in the strict way that ‘classic’ A, B and C networks do. Furthermore, it doesn’t limit the size to 16 M, 64 k or 256 IP numbers. Instead, any power of 2 can be used as the size of a network (number of hosts + network address + broadcast address). In other words, CIDR sees an IP address as a 32-bit address rather than a 4-byte one.


The following table shows the netmasks in binary form. The ‘CIDR’ column is the number of ‘1’s from left to right. This is also known as ‘slash notation’.

Binary                             Hex        Quad Dec          2ⁿ    CIDR   Number of addresses

00000000000000000000000000000000   00000000   0.0.0.0           2³²   /0     4,294,967,296     4 G
10000000000000000000000000000000   80000000   128.0.0.0         2³¹   /1     2,147,483,648     2 G
11000000000000000000000000000000   C0000000   192.0.0.0         2³⁰   /2     1,073,741,824     1 G
11100000000000000000000000000000   E0000000   224.0.0.0         2²⁹   /3       536,870,912   512 M
11110000000000000000000000000000   F0000000   240.0.0.0         2²⁸   /4       268,435,456   256 M
11111000000000000000000000000000   F8000000   248.0.0.0         2²⁷   /5       134,217,728   128 M
11111100000000000000000000000000   FC000000   252.0.0.0         2²⁶   /6        67,108,864    64 M
11111110000000000000000000000000   FE000000   254.0.0.0         2²⁵   /7        33,554,432    32 M
11111111000000000000000000000000   FF000000   255.0.0.0         2²⁴   /8        16,777,216    16 M
11111111100000000000000000000000   FF800000   255.128.0.0       2²³   /9         8,388,608     8 M
11111111110000000000000000000000   FFC00000   255.192.0.0       2²²   /10        4,194,304     4 M
11111111111000000000000000000000   FFE00000   255.224.0.0       2²¹   /11        2,097,152     2 M
11111111111100000000000000000000   FFF00000   255.240.0.0       2²⁰   /12        1,048,576     1 M
11111111111110000000000000000000   FFF80000   255.248.0.0       2¹⁹   /13          524,288   512 k
11111111111111000000000000000000   FFFC0000   255.252.0.0       2¹⁸   /14          262,144   256 k
11111111111111100000000000000000   FFFE0000   255.254.0.0       2¹⁷   /15          131,072   128 k
11111111111111110000000000000000   FFFF0000   255.255.0.0       2¹⁶   /16           65,536    64 k
11111111111111111000000000000000   FFFF8000   255.255.128.0     2¹⁵   /17           32,768    32 k
11111111111111111100000000000000   FFFFC000   255.255.192.0     2¹⁴   /18           16,384    16 k
11111111111111111110000000000000   FFFFE000   255.255.224.0     2¹³   /19            8,192     8 k
11111111111111111111000000000000   FFFFF000   255.255.240.0     2¹²   /20            4,096     4 k
11111111111111111111100000000000   FFFFF800   255.255.248.0     2¹¹   /21            2,048     2 k
11111111111111111111110000000000   FFFFFC00   255.255.252.0     2¹⁰   /22            1,024     1 k
11111111111111111111111000000000   FFFFFE00   255.255.254.0     2⁹    /23              512
11111111111111111111111100000000   FFFFFF00   255.255.255.0     2⁸    /24              256
11111111111111111111111110000000   FFFFFF80   255.255.255.128   2⁷    /25              128
11111111111111111111111111000000   FFFFFFC0   255.255.255.192   2⁶    /26               64
11111111111111111111111111100000   FFFFFFE0   255.255.255.224   2⁵    /27               32
11111111111111111111111111110000   FFFFFFF0   255.255.255.240   2⁴    /28               16
11111111111111111111111111111000   FFFFFFF8   255.255.255.248   2³    /29                8
11111111111111111111111111111100   FFFFFFFC   255.255.255.252   2²    /30                4
11111111111111111111111111111110   FFFFFFFE   255.255.255.254   2¹    /31                2
11111111111111111111111111111111   FFFFFFFF   255.255.255.255   2⁰    /32                1

What used to be class A is now ‘/8’, B is ‘/16’, C is ‘/24’ and ‘/32’ is the ‘netmask’ for a single host.
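To see how a slash value maps to a dotted-quad mask programmatically, here is a small sketch using Python’s standard ipaddress module (the function name is mine):

```python
import ipaddress

def netmask_from_prefix(prefix: int) -> str:
    """Build the dotted-quad netmask for /prefix, as in the table above:
    'prefix' ones from the left, zeros to the right, in a 32-bit word."""
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return str(ipaddress.IPv4Address(mask))

print(netmask_from_prefix(8))    # 255.0.0.0       (old class A)
print(netmask_from_prefix(24))   # 255.255.255.0   (old class C)
print(netmask_from_prefix(26))   # 255.255.255.192
```

The shift-and-mask expression is exactly the “count the 1 bits from the left” rule from the table.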

Netmasks are used by routers to make routing decisions. For instance;

           Quad Dec        Hex        Binary

Address    192.168.0.1     C0A80001   1100 0000  1010 1000  0000 0000  0000 0001

Network    192.168.0.0     C0A80000   1100 0000  1010 1000  0000 0000  0000 0000

Netmask    255.255.255.0   FFFFFF00   1111 1111  1111 1111  1111 1111  0000 0000

If you want to know if 192.168.0.1 belongs to the network 192.168.0.0/24, simply do a bitwise AND on the address and netmask;

 Addr   1100 0000  1010 1000  0000 0000  0000 0001
 Mask   1111 1111  1111 1111  1111 1111  0000 0000
 AND   --------------------------------------------
 Net    1100 0000  1010 1000  0000 0000  0000 0000

This could also be phrased as:

if ( (Address & Netmask) == Network ) {
     // Belongs to network
} else {
     // Does not belong to network
}

Which yields:

if ( (0xC0A80001 & 0xFFFFFF00) == 0xC0A80000 ) {
     // Belongs to network
} else {
     // Does not belong to network
}

Bitwise operators are implemented directly in processor hardware and are therefore very efficient.
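The same test as a runnable sketch (Python here, since the snippet above is C-style pseudocode; the values are the 192.168.0.x example from the text):

```python
# 192.168.0.1 AND 255.255.255.0 should give the network 192.168.0.0.
address = 0xC0A80001   # 192.168.0.1
netmask = 0xFFFFFF00   # 255.255.255.0
network = 0xC0A80000   # 192.168.0.0

if (address & netmask) == network:
    print("Belongs to network")          # this branch is taken
else:
    print("Does not belong to network")
```

Note the parentheses around the AND: in C-style languages `==` binds tighter than `&`, so they matter there too.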


The bits in the ‘host’ part of a network address are all ‘0’. Bits to the left of the host bits can be either ‘0’ or ‘1’ (this is rather like subnetting a classic A, B or C network).

The following table/graph shows a network being split in two smaller networks, then in four, then in eight, then 16, etc.

    Netmask:
    0   128   192   224   240   248   252
    Hex Netmask:
    0    80    C0    E0    F0    F8    FC
    Split in:
      2     4     8     16    32    64

    0-+-->0-+-->0-+-->0-+-->0-+-->0-+-->0 (00)    Network address (hex)
      |     |     |     |     |     |
      |     |     |     |     |     +-->4 (04)
      |     |     |     |     |
      |     |     |     |     +-->8-+-->8 (08)
      |     |     |     |           |
      |     |     |     |           +->12 (0C)
      |     |     |     |
      |     |     |     +->16-+->16-+->16 (10)
      |     |     |           |     |
      |     |     |           |     +->20 (14)
      |     |     |           |
      |     |     |           +->24-+->24 (18)
      |     |     |                 |
      |     |     |                 +->28 (1C)
      |     |     |
      |     |     +->32-+->32-+->32-+->32 (20)
      |     |           |     |     |
      |     |           |     |     +->36 (24)
      |     |           |     |
      |     |           |     +->40-+->40 (28)
      |     |           |           |
      |     |           |           +->44 (2C)
      |     |           |
      |     |           +->48-+->48-+->48 (30)
      |     |                 |     |
      |     |                 |     +->52 (34)
      |     |                 |
      |     |                 +->56-+->56 (38)
      |     |                       |
      |     |                       +->60 (3C)
      |     |
      |     +->64-+->64-+->64-+->64-+->64 (40)
      |           |     |     |     |
      |           |     |     |     +->68 (44)
      |           |     |     |
      |           |     |     +->72-+->72 (48)
      |           |     |           |
      |           |     |           +->76 (4C)
      |           |     |
      |           |     +->80-+->80-+->80 (50)
      |           |           |     |
      |           |           |     +->84 (54)
      |           |           |
      |           |           +->88-+->88 (58)
      |           |                 |
      |           |                 +->92 (5C)
      |           |
      |           +->96-+->96-+->96-+->96 (60)
      |                 |     |     |
      |                 |     |     +>100 (64)
      |                 |     |
      |                 |     +->104+>104 (68)
      |                 |           |
      |                 |           +>108 (6C)
      |                 |      
      |                 +>112-+->112+>112 (70)
      |                       |     |
      |                       |     +>116 (74)
      |                       |
      |                       +->120+>120 (78)
      |                             |
      |                             +>124 (7C)
      +->128+->128+->128+->128+->128+>128 (80)
            |     |     |     |     |
            |     |     |     |     +>132 (84)
            |     |     |     |
            |     |     |     +->136+>136 (88)
            |     |     |           |
            |     |     |           +>140 (8C)
            |     |     |
            |     |     +->144+->144+>144 (90)
            |     |           |     |
            |     |           |     +>148 (94)
            |     |           |
            |     |           +->152+>152 (98)
            |     |                 |
            |     |                 +>156 (9C)
            |     |
            |     +->160+->160+->160+>160 (A0)
            |           |     |     |
            |           |     |     +>164 (A4)
            |           |     |
            |           |     +->168+>168 (A8)
            |           |           |
            |           |           +>172 (AC)
            |           |
            |           +->176+->176+>176 (B0)
            |                 |     |
            |                 |     +>180 (B4)
            |                 |
            |                 +->184+>184 (B8)
            |                       |
            |                       +>188 (BC)
            +->192+->192+->192+->192+>192 (C0)
                  |     |     |     |
                  |     |     |     +>196 (C4)
                  |     |     |
                  |     |     +->200+>200 (C8)
                  |     |           |
                  |     |           +>204 (CC)
                  |     |
                  |     +->208+->208+>208 (D0)
                  |           |     |
                  |           |     +>212 (D4)
                  |           |
                  |           +->216+>216 (D8)
                  |                 |
                  |                 +>220 (DC)
                  +->224+->224+->224+>224 (E0)
                        |     |     |
                        |     |     +>228 (E4)
                        |     |
                        |     +->232+>232 (E8)
                        |           |
                        |           +>236 (EC)
                        +->240+->240+>240 (F0)
                              |     |
                              |     +>244 (F4)
                              +->248+>248 (F8)
                                    +>252 (FC)

Example: Subnetting a /24 in a table.

In the example above the smallest network is four successive IP addresses. If you want even smaller ranges, below is an example of ‘248’ being split in two and then four:

    Netmask:    252   254   255
    Hex mask:   FC    FE    FF

                248+->248+->248 (F8)
                   |     |
                   |     +->249 (F9)
                   +->250+->250 (FA)
                         +->251 (FB)
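If you’d rather not split networks by hand as in the trees above, Python’s ipaddress module can enumerate the subnets for you; a quick sketch (the /24 used is a hypothetical example):

```python
import ipaddress

net = ipaddress.ip_network('192.168.1.0/24')

# Split the /24 into /26s: four subnets of 64 addresses each,
# starting on multiples of 64, just as in the diagram.
for sub in net.subnets(new_prefix=26):
    print(sub)
# 192.168.1.0/26
# 192.168.1.64/26
# 192.168.1.128/26
# 192.168.1.192/26
```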

IPv6 slash notation

IPv6 works the same way. The numbers are just bigger.

Per bit

Netmask binary     Hex    /

0000000000000000   0000   /0
1000000000000000   8000   /1
1100000000000000   c000   /2
1110000000000000   e000   /3
1111000000000000   f000   /4
1111100000000000   f800   /5
1111110000000000   fc00   /6
1111111000000000   fe00   /7
1111111100000000   ff00   /8
1111111110000000   ff80   /9
1111111111000000   ffc0   /10
1111111111100000   ffe0   /11
1111111111110000   fff0   /12
1111111111111000   fff8   /13
1111111111111100   fffc   /14
1111111111111110   fffe   /15
1111111111111111   ffff   /16

‘ffff’ in IPv6 is the same as ‘255.255’ in IPv4.

Per 16 bits

Netmask                                   /      2ⁿ     Number of addresses                                           Number of /64s

0000:0000:0000:0000:0000:0000:0000:0000   /0     2¹²⁸   340,282,366,920,938,463,463,374,607,431,768,211,456            16 E
ffff:0000:0000:0000:0000:0000:0000:0000   /16    2¹¹²         5,192,296,858,534,827,628,530,496,329,220,096           256 T
ffff:ffff:0000:0000:0000:0000:0000:0000   /32    2⁹⁶                 79,228,162,514,264,337,593,543,950,336             4 G
ffff:ffff:ffff:0000:0000:0000:0000:0000   /48    2⁸⁰                      1,208,925,819,614,629,174,706,176     1 Y    64 k
ffff:ffff:ffff:ffff:0000:0000:0000:0000   /64    2⁶⁴                             18,446,744,073,709,551,616    16 E     1
ffff:ffff:ffff:ffff:ffff:0000:0000:0000   /80    2⁴⁸                                    281,474,976,710,656   256 T
ffff:ffff:ffff:ffff:ffff:ffff:0000:0000   /96    2³²                                          4,294,967,296     4 G
ffff:ffff:ffff:ffff:ffff:ffff:ffff:0000   /112   2¹⁶                                                 65,536    64 k
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff   /128   2⁰                                                       1     1

‘:0000:’ can be written as ‘:0:’. And the longest sequence of zeros as ‘::’.

Since the IPv6 internet is 2000::/3 (2000:0000:0000:0000:0000:0000:0000:0000 to 3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff), the number of available addresses is 2¹²⁵ = 42,535,295,865,117,307,932,921,825,928,971,026,432.
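That count is easy to sanity-check with Python’s ipaddress module:

```python
import ipaddress

# 2000::/3 fixes 3 prefix bits, leaving 125 free bits.
internet = ipaddress.ip_network('2000::/3')
print(internet.num_addresses == 2 ** 125)   # True
print(internet.num_addresses)               # 42535295865117307932921825928971026432
```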

/56 and /60

Some ISPs provide a /56 or a /60 instead of a /48;

Netmask                                   /      2ⁿ     Number of addresses                 Number of /64s

ffff:ffff:ffff:0000:0000:0000:0000:0000   /48    2⁸⁰    1,208,925,819,614,629,174,706,176   65,536
ffff:ffff:ffff:ff00:0000:0000:0000:0000   /56    2⁷²        4,722,366,482,869,645,213,696     256
ffff:ffff:ffff:fff0:0000:0000:0000:0000   /60    2⁶⁸          295,147,905,179,352,825,856      16
ffff:ffff:ffff:ffff:0000:0000:0000:0000   /64    2⁶⁴           18,446,744,073,709,551,616       1

A /48 is 2¹⁶ = 65,536 successive /64s. A /56 is 2⁸ = 256 successive /64s. A /60 is 2⁴ = 16 successive /64s.
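Those /64 counts can be checked with a few lines of Python (the 2001:db8:: prefixes are documentation examples; the helper name is mine):

```python
import ipaddress

def count_64s(prefix: str) -> int:
    """Number of /64s inside a prefix: 2 to the power (64 - prefix length)."""
    net = ipaddress.ip_network(prefix)
    return 2 ** (64 - net.prefixlen)

print(count_64s('2001:db8::/48'))  # 65536
print(count_64s('2001:db8::/56'))  # 256
print(count_64s('2001:db8::/60'))  # 16
```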


Some advocate the use of /120s. A /120 is the same size as an IPv4 /24: 256 addresses;

Netmask                                   /      2ⁿ     Number of addresses

ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00   /120   2⁸     256

The idea is to use only 256 addresses out of a /64 and firewall the rest, in order to avoid NDP (Neighbour Discovery Protocol) exhaustion attacks.

Combine host and network in one statement

Suppose I have a host ‘2001:db8:1234:1::1/128’ and a network ‘2001:db8:1234:1::/64’. One can combine both statements (e.g., in ifconfig) in one statement: ‘2001:db8:1234:1::1/64’.
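Python’s ipaddress module models this combined host-plus-network form directly via ip_interface; a quick sketch:

```python
import ipaddress

# One statement carrying both the host address and its network.
iface = ipaddress.ip_interface('2001:db8:1234:1::1/64')
print(iface.ip)        # 2001:db8:1234:1::1
print(iface.network)   # 2001:db8:1234:1::/64
```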


Question: Net Masks and the Subnet Calculator

Determining the proper mask value to assign to router and client IP addresses is sometimes difficult. You are usually pretty safe using for your IPNetRouter gateway’s private subnet, especially if you never intend to have more than 254 unique LAN clients on your LAN. The approved private LAN network ranges are described in RFC-1918.

In the simple case, the lower the numbers in the subnet mask, the greater the number of valid IP addresses in a subnetwork. Let’s start with the standard, typical mask for a home LAN, 255.255.255.0. It typically permits 254 clients on a LAN connected to the IPNetRouter gateway (e.g., x.y.z.1 through x.y.z.254 are good IPs to use on the x.y.z subnet with mask 255.255.255.0; x.y.z.0 and x.y.z.255 are generally not, because of the way IP routing works). If you raise the last number of the subnet mask, you lower the number of clients permitted on your LAN. For instance, if you set it to 255.255.255.252, only three LAN clients and the gateway (four IP addresses) will be permitted to communicate with one another on that particular subnet. To route properly, the router should be one of the IP addresses in the same subnet as the clients.

If you understand binary operations the above will make more sense since the number of clients on a subnet is limited by performing a binary AND operation between the subnet mask and a given IP address.

Using the Subnet Calculator Tool

Using the Subnet Calculator tool in IPNetRouter or IPNetMonitor, you can see how many clients can be supported on an IP subnet based on a particular subnet mask. The prefix length set in the subnet calculator is equivalent to the shorthand value in the following table:

Net Mask          Shorthand   Resulting subnet size
255.255.255.0     /24         254 hosts
255.255.0.0       /16         65,534 hosts
255.255.255.255   /32         1 host (the identity mask)
255.255.255.128   /25         128 hosts
255.255.255.252   /30         4 hosts

By experimenting with the last IP address in the example, you can see how the subnet and client ID change as you alter the mask while the IP address remains constant. It is the network number that is used to determine whether a client is on the same or a different subnet, i.e., whether an IP packet can be sent directly onto the local network or not.

For each increase of one in the shorthand mask number, the number of available clients on your local LAN is halved. For each decrease of one in the mask (again, using the “/” syntax), the number of permitted clients on the LAN is doubled. This is a simplistic explanation, good enough for handling masks with shorthand values of /24 through /32 (longhand, last octet 0 through 255). The subnet calculator can determine the range of a client’s local network from its IP address and network mask. Shorthand “/30” represents a sub-LAN of four machines (hosts) with a network number determined by the machine’s IP address; shorthand “/31” is for a subnet of two clients; shorthand “/29” is for a network of eight clients, etc.
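The halving/doubling rule is easy to verify with Python’s ipaddress module (10.0.0.0 is just a hypothetical network):

```python
import ipaddress

# num_addresses halves for each bit added to the mask,
# and doubles for each bit removed.
for prefix in (24, 25, 26, 29, 30, 31, 32):
    net = ipaddress.ip_network(f'10.0.0.0/{prefix}')
    print(f"/{prefix}: {net.num_addresses} addresses")
# /24 gives 256, /25 gives 128, /26 gives 64, /29 gives 8,
# /30 gives 4, /31 gives 2, /32 gives 1.
```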

Some of the interfaces in IPNetRouter support the “/” syntax for masks; others support the dotted-decimal style (e.g., 255.255.255.0). Using the Subnet Calculator, you can do the conversion between the two without much hassle.

For filtering of IP packets, the net mask is used to designate a range of IP addresses to apply the filter to. In the last example, addresses .80 through .83 would be filtered if a “/30” mask was applied.

If you want to know more about network masks, RFC-950 is a good starting point. See the help text for the Subnet Calculator for more information on how it works.

Binary Subnet Masks and Routing–the Short Version

(The Internet was designed by mathematicians and people with strong mathematics backgrounds. If you are not well-versed in binary numbers but are interested in how routing really works, the best thing is to find an easy guide to the Internet; your local librarian or bookstore may be able to recommend such a book (we hope). Maybe someday it will be easier. For now…)

If any 32-bit IP address is ANDed with 255.255.255.0 (the equivalent of 24 “1” bits followed by eight “0” bits), you are left with 256 possible client IDs in a given subnet (actually 254, since the all-1s and all-0s host numbers are typically reserved). ANDing 255.255.255.252 with an IP address, only four addresses will be valid for the local subnet. Doesn’t make sense? Well, think of it this way. The destination address and the origination IP address are each ANDed with the origination IP’s mask for any packet sent. The masks obliterate the client IDs (which are still kept in the packet header), and the two results are then compared with one another. The following two examples take place on the originating host.

Destination of an IP datagram is on the same LAN

Origination is, mask is, the AND operation gives

Destination is, mask is, the AND operation gives

Since the two results match, the machine sends the packet out on the LAN without asking the router what to do; it’s a local neighborhood destination. (Yep, you don’t need a router if you use the same network and masks for a local LAN when using straight IP addressing.)

Destination and originating hosts are on different LANs

Origination is, mask is, the AND operation gives

Destination is, mask is, the AND operation gives*

Since the source and destination networks are different the packet is sent to the router for further handling. (*NOTE: the origination mask is used for mask calculations to avoid problems when using different masks on the same subnetwork; if the sending host determines that the IP packet it is about to send is not on its subnet, it should send the packet to a router/gateway for handling.)
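A runnable sketch of the two cases above, using hypothetical RFC 5737 documentation addresses and a helper name of my own choosing:

```python
import ipaddress

def same_subnet(src: str, dst: str, mask: str) -> bool:
    """AND both addresses with the *origination* mask and compare,
    as described above. Addresses here are hypothetical examples."""
    m = int(ipaddress.IPv4Address(mask))
    a = int(ipaddress.IPv4Address(src))
    b = int(ipaddress.IPv4Address(dst))
    return (a & m) == (b & m)

print(same_subnet('198.51.100.10', '198.51.100.20', '255.255.255.0'))  # True  -> deliver locally
print(same_subnet('198.51.100.10', '203.0.113.5', '255.255.255.0'))    # False -> send to router
```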

In the instance of an address with a mask of 255.255.255.252, there are only four local host IPs that are within the same subnetwork. All other addresses will result in the packet being sent to the local router for handling. The last number, 252, is equivalent to 11111100 in binary.

Question: Do all the subnets in a network have to have the same subnet mask?


Subnets can have different masks; this is called VLSM, variable-length subnet masking (see Classless Inter-Domain Routing).

In your example you specified host addresses, not networks, since the host part of the IP addresses is not zero, and obviously /192 was meant to be /26. If we round the IP addresses to networks we get, and – they overlap.

Overlapping subnets can be present in the routing table at the same time if their prefix lengths (netmasks) are different. The router will select the matching route with the longest prefix when deciding where to route a packet.

So destination IP will match the first route, will match the second and will match the third.
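A longest-prefix-match sketch in Python (the overlapping routes here are hypothetical, standing in for the ones from the question):

```python
import ipaddress

# Overlapping routes can coexist; the longest prefix wins.
routes = [ipaddress.ip_network(n) for n in
          ('10.0.0.0/8', '10.1.0.0/16', '10.1.2.0/24')]

def best_route(dst: str):
    """Return the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in routes if addr in r]
    return max(matches, key=lambda r: r.prefixlen)

print(best_route('10.1.2.3'))    # 10.1.2.0/24
print(best_route('10.1.9.9'))    # 10.1.0.0/16
print(best_route('10.200.0.1'))  # 10.0.0.0/8
```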

Answer:        i.e. /30        i.e. /30          i.e. /29

           The above mask assignment is fine, because none of the subnets overlap and the addresses you’ve mentioned are included as well.

It depends on the block size you choose; overlap doesn’t work. A subnet’s network ID should be a multiple of its block size; starting from anywhere in the middle won’t work.

/30 = 255.255.255.252, Block size: 4, Subnets: 0,4,8,12,16,20,24….

/28 = 255.255.255.240, Block size: 16, Subnets: 0,16,32,48,64,80….

/26 = 255.255.255.192, Block size: 64, Subnets: 0,64,128,192…

When applying variable-length subnet masks you can use the same class of address, but the block sizes have to be chosen so that the subnets do not overlap.
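A small sketch of the block-size rule (Python; the helper name is mine):

```python
def subnet_starts(mask_last_octet: int, count: int = 6) -> list:
    """Valid subnet network IDs in the last octet are multiples of the
    block size, where block size = 256 - last octet of the mask."""
    block = 256 - mask_last_octet
    return list(range(0, block * count, block))

print(subnet_starts(252))      # [0, 4, 8, 12, 16, 20]    /30
print(subnet_starts(240))      # [0, 16, 32, 48, 64, 80]  /28
print(subnet_starts(192, 4))   # [0, 64, 128, 192]        /26
```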


In your example –  is first host address for network is host address for network with mask [Depending on where you apply this, it may cause overlap with above] is a host address for network with mask [This overlaps with previous networks]

Be careful when applying variable-length subnet masks; they are there to save address space, not to overlap networks.


If I’m not wrong, you are asking whether different networks have to have the same subnet mask.

My answer is no: different networks do not require the same subnet mask.

You can choose a different subnet mask for each network.

Question: Classful network

A classful network is a network addressing architecture used in the Internet from 1981 until the introduction of Classless Inter-Domain Routing in 1993. The method divides the IP address space for Internet Protocol version 4 (IPv4) into five address classes based on the leading four address bits. Classes A, B, and C provide unicast addresses for networks of three different network sizes. Class D is for multicast networking and the class E address range is reserved for future or experimental purposes.

Since its discontinuation, remnants of classful network concepts have remained in practice only in limited scope in the default configuration parameters of some network software and hardware components, most notably in the default configuration of subnet masks.


In the original address definition, the most significant eight bits of the 32-bit IPv4 address were the network number field, which specified the particular network a host was attached to. The remaining 24 bits specified the local address, also called the rest field (the rest of the address), which uniquely identified a host connected to that network.[1] This format was sufficient at a time when only a few large networks existed, such as the ARPANET (network number 10), and before the wide proliferation of local area networks (LANs). As a consequence of this architecture, the address space supported only a low number (254) of independent networks. It became clear early in the growth of the network that this would be a critical scalability limitation.

Introduction of address classes

Expansion of the network had to ensure compatibility with the existing address space and the IPv4 packet structure, and avoid the renumbering of the existing networks. The solution was to expand the definition of the network number field to include more bits, allowing more networks to be designated, each potentially having fewer hosts. Since all existing network numbers at the time were smaller than 64, they had only used the 6 least-significant bits of the network number field. Thus it was possible to use the most-significant bits of an address to introduce a set of address classes while preserving the existing network numbers in the first of these classes.

The new addressing architecture was introduced by RFC 791 in 1981 as a part of the specification of the Internet Protocol.[2] It divided the address space into primarily three address formats, henceforth called address classes, and left a fourth range reserved to be defined later.

The first class, designated as Class A, contained all addresses in which the most significant bit is zero. The network number for this class is given by the next 7 bits, therefore accommodating 128 networks in total, including the zero network, and including the IP networks already allocated. A Class B network was a network in which all addresses had the two most-significant bits set to 1 and 0 respectively. For these networks, the network address was given by the next 14 bits of the address, thus leaving 16 bits for numbering hosts on the network, for a total of 65,536 addresses per network. Class C was defined with the 3 high-order bits set to 1, 1, and 0, and designating the next 21 bits to number the networks, leaving each network with 256 local addresses.

The leading bit sequence 111 designated an at-the-time unspecified addressing mode (“escape to extended addressing mode“),[2] which was later subdivided as Class D (1110) for multicast addressing, while leaving as reserved for future use the 1111 block designated as Class E.[3]

Classful addressing definition

Class                 Leading   Network    Rest       Number of         Addresses          Total addresses        Start        End               Default subnet   CIDR
                      bits      bit field  bit field  networks          per network        in class               address      address           mask             notation

Class A               0         8          24         128 (2⁷)          16,777,216 (2²⁴)   2,147,483,648 (2³¹)    0.0.0.0      127.255.255.255   255.0.0.0        /8
Class B               10        16         16         16,384 (2¹⁴)      65,536 (2¹⁶)       1,073,741,824 (2³⁰)    128.0.0.0    191.255.255.255   255.255.0.0      /16
Class C               110       24         8          2,097,152 (2²¹)   256 (2⁸)           536,870,912 (2²⁹)      192.0.0.0    223.255.255.255   255.255.255.0    /24
Class D (multicast)   1110      not defined           not defined       not defined        268,435,456 (2²⁸)      224.0.0.0    239.255.255.255   not defined      /4
Class E (reserved)    1111      not defined           not defined       not defined        268,435,456 (2²⁸)      240.0.0.0    255.255.255.255   not defined      /4

The number of addresses usable for addressing specific hosts in each network is always 2ᴺ − 2, where N is the number of rest-field bits, and the subtraction of 2 adjusts for the use of the all-bits-zero host portion as the network address and the all-bits-one host portion as the broadcast address. Thus, for a Class C address with 8 bits available in the host field, the maximum number of hosts is 254.
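The 2ᴺ − 2 rule as a few lines of Python (the helper name is mine; /31 and /32 are modern special cases the classful rule does not cover):

```python
def usable_hosts(prefix: int) -> int:
    """2**N - 2 usable hosts, where N is the number of host (rest-field)
    bits; one address is lost to the network number, one to broadcast."""
    host_bits = 32 - prefix
    return 2 ** host_bits - 2

print(usable_hosts(24))  # 254       (old class C)
print(usable_hosts(16))  # 65534     (old class B)
print(usable_hosts(8))   # 16777214  (old class A)
```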

Today, IP addresses are associated with a subnet mask. This was not required in a classful network because the mask was implicitly derived from the IP address itself: any network device would inspect the first few bits of the IP address to determine the class of the address.

The blocks numerically at the start and end of classes A, B and C were originally reserved for special addressing or future features, i.e., 0.0.0.0/8 and 127.0.0.0/8 are reserved in former class A; 128.0.0.0/16 and 191.255.0.0/16 were reserved in former class B but are now available for assignment; 192.0.0.0/24 and 223.255.255.0/24 are reserved in former class C. While the 127.0.0.0/8 network is a Class A network, it is designated for loopback and cannot be assigned to a network.[4]

Class D is reserved for multicast and cannot be used for regular unicast traffic.

Class E is reserved and cannot be used on the public Internet. Many older routers will not accept using it in any context.

Bit-wise representation

In the following table:

  • n indicates a bit used for the network ID.
  • H indicates a bit used for the host ID.
  • X indicates a bit without a specified purpose.
Class A
  0.  0.  0.  0 = 00000000.00000000.00000000.00000000
127.255.255.255 = 01111111.11111111.11111111.11111111

Class B
128.  0.  0.  0 = 10000000.00000000.00000000.00000000
191.255.255.255 = 10111111.11111111.11111111.11111111

Class C
192.  0.  0.  0 = 11000000.00000000.00000000.00000000
223.255.255.255 = 11011111.11111111.11111111.11111111

Class D
224.  0.  0.  0 = 11100000.00000000.00000000.00000000
239.255.255.255 = 11101111.11111111.11111111.11111111

Class E
240.  0.  0.  0 = 11110000.00000000.00000000.00000000
255.255.255.255 = 11111111.11111111.11111111.11111111

Replacement methods

The first architecture change extended the addressing capability in the Internet, but did not prevent IP address exhaustion. The problem was that many sites needed larger address blocks than a Class C network provided, and therefore they received a Class B block, which was in most cases much larger than required. In the rapid growth of the Internet, the pool of unassigned Class B addresses (2¹⁴, or about 16,000) was rapidly being depleted. Classful networking was replaced by Classless Inter-Domain Routing (CIDR), starting in 1993 with the specification of RFC 1518 and RFC 1519, to attempt to solve this problem.

Before the introduction of address classes, the only address blocks available were what later became known as Class A networks.[5] As a result, some organizations involved in the early development of the Internet received address space allocations far larger than they would ever need.

Question: What happens when the IP addresses of two computers are the same but the subnet masks are different?


The question is simple but the answer is tricky and lengthy :

If you are using DHCP on your router for address assignment, then NEVER EVER any router would assign a bad or out-of-the-subnet IP to any host in that particular subnet. PERIOD. The address assignment by any router would be perfect.

If a human assigns a bad IP intentionally which belongs to any other subnet, then

  1. A packet destined for your computer (the one with the bad IP) will not reach your subnet; the router will route it to the subnet the address actually belongs to, because all routing protocols use the longest-match-first rule, in which the router prefers the route with the largest CIDR prefix value. This is logically consistent: a /28 is a smaller subnet than a /27, so the router finds the most specific (smallest possible) matching subnet first and routes the packet to it.
  2. Any packet originating from your "bad IP" PC will reach the Internet server, but any reply from it will not reach you, because, as above, the router will forward it to the other subnet, not to you.


Practically speaking, it is not possible for both systems to keep the same IP address. The subnet masks could possibly be the same, but two PCs cannot communicate while sharing one IP: a duplicate-address conflict will occur in every case.


There will be a conflict between the two. If the two computers are on the same LAN, you will be prompted with a "duplicate IP" or "IP address already exists in network" message. If the two computers are on different LAN segments, the two won't be able to communicate with each other: when data is destined for the same IP, the computer will think it is its own address and will not forward the data to the gateway. It does not look at the subnet mask, because the destination IP address is its own NIC's address.


When two systems have the same IP but different subnet masks, those two systems can't communicate with each other. But each can communicate with systems that have the same subnet mask.

Question: Is there any way two computers in two different subnets can communicate?


There has to be a router or a Layer 3 switch that does inter-VLAN routing.

Question: How to Connect Computers That Are on 2 Different Subnets


Subnetworks, or subnets, are created by taking a single private address range and dividing it into multiple separate networks using a subnet mask. Such division is often used in large companies to help network administrators divide access between different sensitive network resources. Computers located on different subnets may need to communicate directly with one another. Accomplishing this requires that the two machines be connected to a router, which can forward information based on routable IP addresses.

Step 1

Connect the computers to the network. Ensure that each connection eventually reaches a router or a routable switch.

Step 2

Connect the routers to each other. This step is only necessary if the two separate subnets are connected to two physically separate routers. If the two routers do not have an available routable interface, they must be connected to a third, interim "core" router, designed to handle routing between the other routers and anything outside of those networks.

Step 3

Enable a routing protocol in each subnet's router. Options include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) or, on Cisco equipment, Interior Gateway Routing Protocol (IGRP).

Step 4

Allow time for the routing tables to update. Routing protocols advertise to neighboring routers the networks to which they are directly connected. In this way, each router builds a picture of the networks to which it is indirectly connected (i.e., it is connected to a router which is connected to a destination network). When all directly attached routers have up-to-date information about neighboring routers and their attached networks, this is referred to as "convergence." The more complex the network, the longer convergence takes.

Step 5

Log into one of the computers on a subnet and issue a trace route command to the computer on the other subnet. This will show you that communication is functioning properly and that the information is taking the appropriate path (each routed interface, or “hop,” will be listed as part of the route the packet took). To issue a traceroute in Windows, open the command prompt and type “tracert [IP address]”, where [IP address] is the address of the computer on the other subnet.

Question: How to set two different subnets to communicate?

I know this is probably a very simple question, but how do I have a LAN with two different subnets that can seamlessly communicate with each other? I am thinking that if you get too many clients, they use up all the IP addresses in a specific subnet and you need to expand to a new subnet.

I know this probably is just some settings in the router but I have never done it.


I am butchering this, but this is a stepping stone to help you understand the difference.

Example 1…

LAN 1:

IP range = 192.168.1.x

Subnet mask = (assuming a /24)

Default gateway = the shared routing device described below

LAN 2:

IP range = 192.168.2.x

Subnet mask = (assuming a /24)

Default gateway = the shared routing device described below

…for these two to talk, the default gateway of each is actually a single router, or a switch that's capable of routing multiple subnets. There is usually another router/firewall beyond that which handles traffic going out to the Internet.

…if these two LANs are in physically different buildings, then usually an L2L (LAN-to-LAN) VPN tunnel and NAT routing is used so they can talk.


Example 2…


IP range = 192.168.x.x

Subnet mask = (a /16)

Default gateway = 192.168.x.x

…because the subnet mask allows it, more than 254 IPs are available across the LAN, such as 192.168.1.x, 192.168.2.x, 192.168.3.x, etc. Some SOHO/residential-grade routers will not allow it, or will not work, if you try to set a subnet mask other than

Chances are, you wouldn't really want to make your network this big with a subnet mask of, so here are some other options:
 (/23) = 510 addresses
 (/22) = 1,022 addresses
 (/21) = 2,046 addresses
 (/20) = 4,094 addresses

…there are more options, see » ··· cidr.php


If you are re-doing your LAN because you have run out of addresses, and you're going to implement Example 2, then everything must be changed to use the new subnet mask (don't leave existing devices on the old mask or you will have problems).

If this is not what you were getting at, please explain what you're trying to do.


This depends on your exact setup. From your description I'll assume you are looking at what Cisco calls "secondary" addressing (Linux/BSD/etc. call it aliases): two or more subnets natively on the same LAN. You simply need a router that can hairpin traffic, which is a violation of RFCs, but just about everything will do it.

If each subnet is within a VLAN, then it’s a pure routing setup. There’s no layer 3 overlap.

[*OR* you can tell every machine about all of those networks. This is usually not entirely possible, DHCP clients being the hardest to set up.]


this is exactly what I was looking for. This should get me started.


"for these two to talk, the default gateway of each is actually a single router or a switch that's capable of routing multiple subnets. There is usually another router/firewall then capable of traffic going out to the Internet."

Unless the router has multiple interfaces. Which many do.

"…if these two LANs are in physically different buildings, then usually a L2L VPN tunnel and NAT routing is used so they can talk."

Or any one of dozens of different types of dedicated point-to-point connections. A tunnel is convenient if they are both on the Internet, but if they aren't, it's not practical. I'd venture a guess that the majority of connections between routers that aren't handling general Internet traffic run over dedicated links of various types, not VPN traffic.


Let me expand my question one more step. Let us say that I have a client machine on one LAN, with an IP address on the 10.227.0.* network, but all of my other computers are on a separate LAN. What settings do I need to input on the router of the 10.169.169.* LAN so that those machines can communicate with that machine?


"Let me expand my question one more step. Let us say that I have a client machine on one LAN, with an IP address on the 10.227.0.* network, but all of my other computers are on a separate LAN. What settings do I need to input on the router of the 10.169.169.* LAN so that those machines can communicate with that machine?"

The 10.169.169.* machines will either need a route set up on each machine pointing to the router that handles traffic destined for the subnet that contains the target machine, or the default gateway will need a route pointing to that router (if it doesn't handle it already).


Sorry, I forgot to add one more thing: I do not want that machine to be able to see the other machines on the other subnet. Only one-way traffic, if that makes sense?


"Sorry, I forgot to add one more thing: I do not want that machine to be able to see the other machines on the other subnet. Only one-way traffic, if that makes sense?"

No, that doesn't make sense. Presuming TCP communications, such an implementation would not work: the receiver must ACKnowledge the packets received, and without that there will be no flow of data.


Yes, I agree that the ACK would not occur. I am trying to figure out the best way for the 10.169.169.* machines to communicate with the 10.227.0.* subnet, specifically that one machine, without putting the 10.169.169.* subnet at risk of exposure on the other network. Does that make sense?


"Yes, I agree that the ACK would not occur. I am trying to figure out the best way for the 10.169.169.* machines to communicate with the 10.227.0.* subnet, specifically that one machine, without putting the 10.169.169.* subnet at risk of exposure on the other network. Does that make sense?"

You firewall it off. You block all ports unless they absolutely need to be open. You create a web service so communications go through essentially a proxy and not directly.

You haven't specified what you are trying to send. A UDP video stream is quite different from an interactive telnet session, which is quite different from an HTTP request.


Sorry for not being more specific. Basically, the only thing I would need to access on that machine is a web GUI that is reached on a specific port number. So, at the firewall level, set it up to block everything except the port number used to reach the GUI?


If both 10.169.169.x and 10.227.0.x machines are on different interfaces of the same firewall/router, then just use NAT with specific port ACLs to allow them to talk.

I can't get more detailed than that because you haven't said what manufacturer/equipment you're using.


Why NAT? Just ACLs should do it, once the routers are configured correctly, no?


True, depending on equipment…

I work with Cisco ASAs a lot; under v8.3 and later, basically everything becomes a NAT statement (hey, not my idea!).

Question: Connecting two hosts with the same IP address but different subnet masks

Is it possible for two hosts with the same IP address but different subnet masks to communicate without adding a router?


Is it possible for two hosts with the same IP address but different subnet masks to communicate without adding a router? If so, what is the configuration?

Generally speaking, no two devices should have the same IP address unless they are behind a NAT device. Computers need routers to communicate with devices that are not on their same logical subnet. When one computer prepares to communicate with another, it basically goes through three steps to determine if the communication is local or gets addressed to the router. This routing information can be seen by issuing the “route print” command from the cmd prompt. These three steps include:

  1. Are both addresses the same class?
     Source:
     Target:
     YES
  2. Do both share the same network address?
     Source:, Mask:
     Target:, Mask:
     (mask's last octet in binary = 11100000)
     YES
  3. Are both on the same logical subnet?
     Source last octet: 72 = 01001000
     Mask last octet: 224 = 11100000
     Target last octet: 41 = 00101001
     NO

Since the first three bits listed above do not match, the sending computer now knows that the computer it wishes to communicate with is not local, therefore the router will receive the data and is responsible for delivery. I hope this helps you better understand the routing process.

Question: How can I connect two networks with different IP addresses?

I have a router with IP addresses 192.168.0.x. (1st network).

Connected to a LAN connection is a second router with addresses 192.168.1.x. (2nd network).

How do I make it possible for both networks to see each other, and to have internet from the 2nd as well as the 1st?


 …you can make the two ranges one network if you simply change the subnet mask from to

So with the network range is only to; if you use a subnet mask of instead, in that case the network range in your case is to


The idea is good, but I'm tempted to say it won't work, since most home routers don't allow a Class C network (192.168.x.0) to be combined under a Class B-sized CIDR/subnet mask (CIDR /23,

Even if they do, it's still not advisable, because older devices could face routing problems within the supernet. Also, both routers would need to support RIPv2.


For internet access all you have to do is properly setup the second router:

connect the WAN port to the first router

set the WAN interface to either DHCP or manual/Static (whatever is available)

for manual or static the following needs to be done:

set the WAN IP address and subnet mask to an address on the first network (e.g., with mask

set the WAN gateway and DNS server to the first router's IP (e.g.,

The Ethernet side of the second router should be set up as usual:

IP address of the router, e.g.,

DHCP enabled, handing out e.g.– with gateway and DNS server

This is a very basic setup using the NAT feature of the router and will allow all your clients to access the Internet.

If you need networking features between clients on both networks, you will either have to enable advanced routing on the first router and add the appropriate routes to the network behind the second router, or use the easier and better option of combining both networks into one 192.168.0.x network.

Question: What is the subnet mask?

A. As has been shown, an IP address consists of 4 octets and is usually displayed in dotted-decimal format (e.g.; however, this address on its own does not mean much, and a subnet mask is required to show which part of the IP address is the Network ID and which part is the Host ID. Imagine the Network ID as the road name and the Host ID as the house number: with "54 Grove Street", 54 would be the Host ID and Grove Street the Network ID.

For example, with an address of and a subnet mask of, the Network ID is 200.200.200 and the Host ID is 5. This is calculated using the following:

IP Address    11001000.11001000.11001000.00000101
Subnet Mask   11111111.11111111.11111111.00000000
Network ID    11001000.11001000.11001000.00000000
Host ID       00000000.00000000.00000000.00000101

 What happens is a bitwise AND operation between the IP address and the subnet mask, e.g.

1 AND 1=1

1 AND 0=0

0 AND 1=0

0 AND 0=0

There are default subnet masks depending on the class of the IP address as follows:

Class A: to uses subnet mask as default

Class B: to uses subnet mask as default

Class C: to uses subnet mask as default

Where's the 127.x.x.x range??? This is a reserved address range that is used for testing purposes. If you ping you will ping yourself 🙂

The subnet mask is used when two hosts communicate. If the two hosts are on the same network, then Host A will talk directly to Host B; however, if Host B is on a different network, then Host A will have to communicate via a gateway. The way Host A can tell whether it is on the same network is by using the subnet mask. For example:


Host A: 200.200.200.x

Host B: 200.200.200.y

Host C: 200.200.199.z

Subnet Mask:

If Host A communicates with Host B, both have Network ID 200.200.200, so Host A communicates directly with Host B. If Host A communicates with Host C, they are on different networks (200.200.200 and 200.200.199 respectively), so Host A would send via a gateway.

Question: Classless Inter-Domain Routing

Classless Inter-Domain Routing (CIDR /ˈsaɪdər, ˈsɪ-/) is a method for allocating IP addresses and IP routing. The Internet Engineering Task Force introduced CIDR in 1993 to replace the previous addressing architecture of classful network design in the Internet. Its goal was to slow the growth of routing tables on routers across the Internet, and to help slow the rapid exhaustion of IPv4 addresses.[1][2]

IP addresses are described as consisting of two groups of bits in the address: the most significant bits are the network prefix, which identifies a whole network or subnet, and the least significant set forms the host identifier, which specifies a particular interface of a host on that network. This division is used as the basis of traffic routing between IP networks and for address allocation policies.

Whereas classful network design for IPv4 sized the network prefix as one or more 8-bit groups, resulting in the blocks of Class A, B, or C addresses, Classless Inter-Domain Routing allocates address space to Internet service providers and end users on any address bit boundary. In IPv6, however, the interface identifier has a fixed size of 64 bits by convention, and smaller subnets are never allocated to end users.

CIDR encompasses several concepts. It is based on the variable-length subnet masking (VLSM) technique, which allows the specification of arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as CIDR notation, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as for IPv4, and 2001:db8::/32 for IPv6. CIDR introduced an administrative process of allocating address blocks to organizations based on their actual and short-term projected needs. The aggregation of multiple contiguous prefixes resulted in supernets in the larger Internet, which whenever possible are advertised as aggregates, thus reducing the number of entries in the global routing table.


An IP address is interpreted as composed of two parts: a network-identifying prefix followed by a host identifier within that network. In the previous classful network architecture, IP address allocations were based on the bit boundaries of the four octets of an IP address. An address was considered to be the combination of an 8, 16, or 24-bit network prefix along with a 24, 16, or 8-bit host identifier respectively. Thus, the smallest allocation and routing block contained only 256 addresses, too small for most enterprises, and the next larger block contained 65,536 addresses, too large to be used efficiently even by large organizations. This led to inefficiencies in address use as well as inefficiencies in routing, because it required a large number of allocated Class C networks with individual route announcements, geographically dispersed with little opportunity for route aggregation.

During the first decade of the Internet after the invention of the Domain Name System (DNS) it became apparent that the devised system based on the classful network scheme of allocating the IP address space and the routing of IP packets was not scalable.[3] This led to the successive development of subnetting and CIDR. The network class distinctions were removed, and the new system was described as being classless, with respect to the old system, which became known as classful. In 1993, the Internet Engineering Task Force published a new set of standards, RFC 1518 and RFC 1519, to define this new concept of allocation of IP address blocks and new methods of routing IPv4 packets. An updated version of the specification was published as RFC 4632 in 2006.[4]

Classless Inter-Domain Routing is based on variable-length subnet masking (VLSM), which allows a network to be divided into variously sized subnets, providing the opportunity to size a network more appropriately for local needs. Variable-length subnet masks are mentioned in RFC 950.[5] Accordingly, techniques for grouping addresses for common operations were based on the concept of cluster addressing, first proposed by Carl-Herbert Rokitansky.[6][7]

CIDR notation[edit]

CIDR notation is a compact representation of an IP address and its associated routing prefix. The notation is constructed from an IP address, a slash (‘/’) character, and a decimal number. The number is the count of leading 1 bits in the subnet mask. Larger values here indicate smaller networks. The maximum size of the network is given by the number of addresses that are possible with the remaining, least-significant bits below the prefix.

The IP address is expressed according to the standards of IPv4 or IPv6. The address may denote a single, distinct interface address or the beginning address of an entire network. The aggregation of these bits is often called the host identifier.

For example:

  • represents the IPv4 address and its associated routing prefix, or equivalently, its subnet mask, which has 24 leading 1-bits.
  • the IPv4 block represents the 1024 IPv4 addresses from to
  • the IPv6 block 2001:db8::/48 represents the block of IPv6 addresses from 2001:db8:0:0:0:0:0:0 to 2001:db8:0:ffff:ffff:ffff:ffff:ffff.
  • ::1/128 represents the IPv6 loopback address. Its prefix length is 128 which is the number of bits in the address.

Before the implementation of CIDR, IPv4 networks were represented by the starting address and the subnet mask, both written in dot-decimal notation. Thus, was often written as

The number of addresses of a subnet may be calculated as 2^(address length − prefix length), in which the address length is 128 for IPv6 and 32 for IPv4. For example, in IPv4, the prefix length /29 gives: 2^(32 − 29) = 2^3 = 8 addresses.
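The formula is a one-liner; a quick sketch:

```python
def address_count(prefix_len: int, address_len: int = 32) -> int:
    """Addresses in a block: 2 ** (address length - prefix length)."""
    return 2 ** (address_len - prefix_len)

v4_29 = address_count(29)        # an IPv4 /29: 2**3 = 8 addresses
v6_48 = address_count(48, 128)   # an IPv6 /48: 2**80 addresses
```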

Subnet masks[edit]

A subnet mask is a bitmask that encodes the prefix length in quad-dotted notation: 32 bits, starting with a number of 1 bits equal to the prefix length and ending with 0 bits, encoded in four-part dotted-decimal format, for example (the mask for a /24 prefix). A subnet mask encodes the same information as a prefix length, but predates the advent of CIDR. In CIDR notation, the prefix bits are always contiguous. Subnet masks were allowed by RFC 950[5] to specify non-contiguous bits until RFC 4632[4] (Section 5.1) stated that the mask must be left-contiguous. Given this constraint, a subnet mask and CIDR notation serve exactly the same function.

CIDR blocks[edit]


CIDR is principally a bitwise, prefix-based standard for the representation of IP addresses and their routing properties. It facilitates routing by allowing blocks of addresses to be grouped into single routing table entries. These groups, commonly called CIDR blocks, share an initial sequence of bits in the binary representation of their IP addresses. IPv4 CIDR blocks are identified using a syntax similar to that of IPv4 addresses: a dotted-decimal address, followed by a slash, then a number from 0 to 32, i.e., a.b.c.d/n. The dotted decimal portion is the IPv4 address. The number following the slash is the prefix length, the number of shared initial bits, counting from the most-significant bit of the address. When emphasizing only the size of a network, the address portion of the notation is usually omitted. Thus, a /20 block is a CIDR block with an unspecified 20-bit prefix.

An IP address is part of a CIDR block, and is said to match the CIDR prefix if the initial n bits of the address and the CIDR prefix are the same. An IPv4 address is 32 bits so an n-bit CIDR prefix leaves 32 − n bits unmatched, meaning that 232 − n IPv4 addresses match a given n-bit CIDR prefix. Shorter CIDR prefixes match more addresses, while longer prefixes match fewer. An address can match multiple CIDR prefixes of different lengths.

CIDR is also used for IPv6 addresses, and the syntax and semantics are identical. The prefix length can range from 0 to 128, due to the larger number of bits in the address. However, by convention a subnet on broadcast MAC layer networks always has 64-bit host identifiers. Prefixes longer than /64 are rarely used, even on point-to-point links.

Assignment of CIDR blocks[edit]

The Internet Assigned Numbers Authority (IANA) issues to regional Internet registries (RIRs) large, short-prefix CIDR blocks. For example,, with over sixteen million addresses, is administered by RIPE NCC, the European RIR. The RIRs, each responsible for a single, large, geographic area, such as Europe or North America, subdivide these blocks and allocate subnets to local Internet registries (LIRs). Similar subdividing may be repeated several times at lower levels of delegation. End-user networks receive subnets sized according to the size of their network and projected short-term need. Networks served by a single ISP are encouraged by IETF recommendations to obtain IP address space directly from their ISP. Networks served by multiple ISPs, on the other hand, may obtain provider-independent address space directly from the appropriate RIR.


For example, in the late 1990s, the IP address (since reassigned) was used by An analysis of this address identified three CIDR prefixes., a large CIDR block containing over 2 million addresses, had been assigned by ARIN (the North American RIR) to MCI. Automation Research Systems, a Virginia VAR, leased an Internet connection from MCI and was assigned the block, capable of addressing just over 1000 devices. ARS used the /24 block for its publicly accessible servers, of which was one. All of these CIDR prefixes would be used at different locations in the network. Outside MCI's network, the prefix would be used to direct to MCI traffic bound not only for, but also for any of the roughly two million IP addresses with the same initial 11 bits. Within MCI's network, would become visible, directing traffic to the leased line serving ARS. Only within the ARS corporate network would the prefix have been used.

IPv4 CIDR blocks[edit]

Address format  Difference to last address  Mask             Addresses            Relative to class A, B, C  Restrictions on a, b, c, d (0..255 unless noted)  Typical use
a.b.c.d/32  +         1 = 2^0    1/256 C                             Host route
a.b.c.d/31  +         2 = 2^1    1/128 C    d = 0 ... (2n) ... 254   Point-to-point links (RFC 3021)
a.b.c.d/30  +         4 = 2^2    1/64 C     d = 0 ... (4n) ... 252   Point-to-point links (glue network)
a.b.c.d/29  +         8 = 2^3    1/32 C     d = 0 ... (8n) ... 248   Smallest multi-host network
a.b.c.d/28  +        16 = 2^4    1/16 C     d = 0 ... (16n) ... 240  Small LAN
a.b.c.d/27  +        32 = 2^5    1/8 C      d = 0 ... (32n) ... 224
a.b.c.d/26  +        64 = 2^6    1/4 C      d = 0, 64, 128, 192
a.b.c.d/25  +       128 = 2^7    1/2 C      d = 0, 128               Large LAN
a.b.c.0/24  +       256 = 2^8    1 C
a.b.c.0/23  +       512 = 2^9    2 C        c = 0 ... (2n) ... 254
a.b.c.0/22  +     1,024 = 2^10   4 C        c = 0 ... (4n) ... 252   Small business
a.b.c.0/21  +     2,048 = 2^11   8 C        c = 0 ... (8n) ... 248   Small ISP / large business
a.b.c.0/20  +     4,096 = 2^12   16 C       c = 0 ... (16n) ... 240
a.b.c.0/19  +     8,192 = 2^13   32 C       c = 0 ... (32n) ... 224  ISP / large business
a.b.c.0/18  +    16,384 = 2^14   64 C       c = 0, 64, 128, 192
a.b.c.0/17  +    32,768 = 2^15   128 C      c = 0, 128
a.b.0.0/16  +    65,536 = 2^16   256 C = B
a.b.0.0/15  +   131,072 = 2^17   2 B        b = 0 ... (2n) ... 254
a.b.0.0/14  +   262,144 = 2^18   4 B        b = 0 ... (4n) ... 252
a.b.0.0/13  +   524,288 = 2^19   8 B        b = 0 ... (8n) ... 248
a.b.0.0/12  +     1,048,576 = 2^20   16 B       b = 0 ... (16n) ... 240
a.b.0.0/11  +     2,097,152 = 2^21   32 B       b = 0 ... (32n) ... 224
a.b.0.0/10  +     4,194,304 = 2^22   64 B       b = 0, 64, 128, 192
a.b.0.0/9   +     8,388,608 = 2^23   128 B      b = 0, 128
a.0.0.0/8   +      16,777,216 = 2^24   256 B = A                         Largest IANA block allocation
a.0.0.0/7   +      33,554,432 = 2^25   2 A        a = 0 ... (2n) ... 254
a.0.0.0/6   +      67,108,864 = 2^26   4 A        a = 0 ... (4n) ... 252
a.0.0.0/5   +     134,217,728 = 2^27   8 A        a = 0 ... (8n) ... 248
a.0.0.0/4   +     268,435,456 = 2^28   16 A       a = 0 ... (16n) ... 240
a.0.0.0/3   +     536,870,912 = 2^29   32 A       a = 0 ... (32n) ... 224
a.0.0.0/2   +   1,073,741,824 = 2^30   64 A       a = 0, 64, 128, 192
a.0.0.0/1   +   2,147,483,648 = 2^31   128 A      a = 0, 128
0.0.0.0/0   +   4,294,967,296 = 2^32   256 A

In common usage, the first address in a subnet, all binary zero in the host identifier, is reserved for referring to the network itself, while the last address, all binary one in the host identifier, is used as a broadcast address for the network; this reduces the number of addresses available for hosts by 2. As a result, a /31 network, with one binary digit in the host identifier, is rarely used, as such a subnet would provide no available host addresses after this reduction. RFC 3021 creates an exception to the “host all ones” and “host all zeros” rules to make /31 networks usable for point-to-point links. In practice, however, point-to-point links are still typically implemented using /30 networks, with /31 preferred by some providers. /32 addresses must be accessed by explicit routing rules, as there is no room in such a network for a gateway (single-host network).

In routed subnets larger than /31 or /32, the number of available host addresses is usually reduced by two, namely the largest address, which is reserved as the broadcast address, and the smallest address, which identifies the network itself.[8][9]

IPv6 CIDR blocks[edit]

The large address size used in IPv6 permitted implementation of worldwide route summarization and guaranteed sufficient address pools at each site. The standard subnet size for IPv6 networks is a /64 block, which is required for the operation of stateless address autoconfiguration.[10] At first, the IETF recommended in RFC 3177 as a best practice that all end sites receive a /48 address allocation,[11] however, criticism and reevaluation of actual needs and practices has led to more flexible allocation recommendations in RFC 6177,[12] suggesting a significantly smaller allocation for some sites, such as a /56 block for home networks. This IPv6 subnetting reference lists the sizes for IPv6 subnetworks. Different types of network links may require different subnet sizes.[13] The subnet mask separates the bits of the network identifier prefix from the bits of the interface identifier. Selecting a smaller prefix size results in fewer networks covered, but with more addresses within those networks.[14]

2001:0db8:0123:4567:89ab:cdef:1234:5678
|||| |||| |||| |||| |||| |||| |||| ||||
|||| |||| |||| |||| |||| |||| |||| |||128     Single end-points and loopback
|||| |||| |||| |||| |||| |||| |||| |||127   Point-to-point links (inter-router)
|||| |||| |||| |||| |||| |||| |||| ||124
|||| |||| |||| |||| |||| |||| |||| |120
|||| |||| |||| |||| |||| |||| |||| 116
|||| |||| |||| |||| |||| |||| |||112
|||| |||| |||| |||| |||| |||| ||108
|||| |||| |||| |||| |||| |||| |104
|||| |||| |||| |||| |||| |||| 100
|||| |||| |||| |||| |||| |||96
|||| |||| |||| |||| |||| ||92
|||| |||| |||| |||| |||| |88
|||| |||| |||| |||| |||| 84
|||| |||| |||| |||| |||80
|||| |||| |||| |||| ||76
|||| |||| |||| |||| |72
|||| |||| |||| |||| 68
|||| |||| |||| |||64   Single LAN; default prefix size for SLAAC
|||| |||| |||| ||60   Some (very limited) 6rd deployments (/60 = 16 /64)
|||| |||| |||| |56   Minimal end sites assignment[12]; e.g. home network (/56 = 256 /64)
|||| |||| |||| 52   /52 block = 4096 /64 blocks
|||| |||| |||48   Typical assignment for larger sites (/48 = 65536 /64)
|||| |||| ||44
|||| |||| |40
|||| |||| 36   possible future local Internet registry (LIR) extra-small allocations
|||| |||32   LIR minimum allocations
|||| ||28   LIR medium allocations
|||| |24   LIR large allocations
|||| 20   LIR extra large allocations
||12   Regional Internet registry (RIR) allocations from IANA[15]

Prefix aggregation

CIDR provides fine-grained routing prefix aggregation. For example, sixteen contiguous /24 networks can be aggregated and advertised to a larger network as a single /20 routing table entry, if the first 20 bits of their network prefixes match. Two aligned contiguous /20 blocks may be aggregated as a /19 network. This reduces the number of routes that have to be advertised.
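The sixteen-/24s-into-one-/20 aggregation can be sketched with Python's `ipaddress` module. The `198.18.0.0` base below (a benchmarking range) is a hypothetical choice, picked because it is aligned on a /20 boundary:

```python
import ipaddress

# Sixteen contiguous /24 networks starting at a /20-aligned base address.
nets = [ipaddress.ip_network(f"198.18.{i}.0/24") for i in range(16)]

# collapse_addresses merges contiguous, aligned blocks into the largest
# possible aggregates; one /20 entry replaces sixteen /24 entries.
summary = list(ipaddress.collapse_addresses(nets))
print(summary)  # [IPv4Network('198.18.0.0/20')]
```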

Question: CIDR (Classless Inter-Domain Routing or supernetting)

CIDR (Classless Inter-Domain Routing, sometimes called supernetting) is a way to allow more flexible allocation of Internet Protocol (IP) addresses than was possible with the original system of IP address classes. As a result, the number of available Internet addresses was greatly increased, which along with widespread use of network address translation (NAT), has significantly extended the useful life of IPv4.

Originally, IP addresses were assigned in four major address classes, A through D. Each of these classes allocates one portion of the 32-bit IP address format to identify a network gateway — the first 8 bits for class A, the first 16 for class B, and the first 24 for class C. The remainder identify hosts on that network — more than 16 million in class A, 65,534 in class B and 254 in class C. (Class D addresses identify multicast domains.)
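The host-capacity figure for each class follows from the number of host bits, minus the two reserved host addresses (all-zeros for the network, all-ones for broadcast):

```python
# Host capacity per classful network: 2**host_bits - 2, since the
# all-zeros (network) and all-ones (broadcast) addresses are reserved.
network_bits = {"A": 8, "B": 16, "C": 24}
hosts = {cls: 2 ** (32 - bits) - 2 for cls, bits in network_bits.items()}
print(hosts)  # {'A': 16777214, 'B': 65534, 'C': 254}
```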

To illustrate the problems with the class system, consider that one of the most commonly used classes was Class B. An organization that needed more than 254 host machines would often get a Class B license, even though it would have far fewer than 65,534 hosts. This resulted in most of the allocated address block going unused. The inflexibility of the class system accelerated IPv4 address pool exhaustion. With IPv6, addresses grow to 128 bits, greatly expanding the number of possible addresses on the Internet. The transition to IPv6 is slow, however, so IPv4 address exhaustion continues to be a significant issue.

CIDR reduced the problem of wasted address space by providing a new and more flexible way to specify network addresses in routers. CIDR lets one routing table entry represent an aggregation of networks that exist in the forward path that don’t need to be specified on that particular gateway. This is much like how the public telephone system uses area codes to channel calls toward a certain part of the network. This aggregation of networks in a single address is sometimes referred to as a supernet.

Using CIDR, each IP address has a network prefix that identifies either one or several network gateways. The length of the network prefix in IPv4 CIDR is also specified as part of the IP address and varies depending on the number of bits needed, rather than any arbitrary class assignment structure. A destination IP address or route that describes many possible destinations has a shorter prefix and is said to be less specific. A longer prefix describes a destination gateway more specifically. Routers are required to use the most specific, or longest, network prefix in the routing table when forwarding packets. (In IPv6, a CIDR block always gets 64 bits for specifying network addresses.)

A CIDR network address looks like this under IPv4:

The “” is the network address itself and the “18” says that the first 18 bits are the network part of the address, leaving the last 14 bits for specific host addresses.


CIDR is now the routing system used by virtually all gateway routers on the Internet’s backbone network. The Internet’s regulating authorities expect every Internet service provider (ISP) to use it for routing. CIDR is supported by the Border Gateway Protocol, the prevailing exterior (interdomain) gateway protocol, and by the OSPF interior (or intradomain) gateway protocol. Older gateway protocols, such as the Exterior Gateway Protocol and version 1 of the Routing Information Protocol, do not support CIDR.

What is a VLAN?

Before implementing or managing a VLAN, one must understand what a VLAN is. A VLAN, or virtual (logical) LAN, is a local area network with definitions that map workstations based on anything except geographic location. For example, a VLAN might have a definition that maps workstations by department, type of user, and so on. The benefits of VLANs include easier management of workstations, load balancing, bandwidth allocation and tighter security.

VLANs become even more efficient when coupled with server virtualization. In a virtualized data center environment, the VLAN strings the physical servers together and creates a route. By allowing virtual machines to move across physical servers in the same VLAN, administrators can keep tabs on the virtual machines and manage them more efficiently.

VLAN resources

  • VLANs versus IP subnets: Why use a VLAN over IP subnetting? A virtual local area network (VLAN) is more secure than IP subnetting. This expert answer examines both VLANs and IP subnets and the key differences between them.
  • For more VLAN basics, check out this tutorial.

 What’s the best way to configure a VLAN?

There are three ways of configuring a VLAN: static, dynamic and port-centric. The configuration will be based on the needs of the VLAN. For example, for more security, administrators opt for a static VLAN, which assigns the VLAN membership to a switch’s port; whereas a dynamic VLAN assigns the VLAN membership to the MAC address of the host or device.


Switches that support VLANs establish the VLAN by either frame-tagging or filtering, both of which look at the frame and decide where it should be sent. Frame tagging “tags” a frame to keep track of it as it travels through the switch’s fabric. Frame filtering examines specific information from each frame through a filtering table that is developed for the switch, allowing for examination of many different frame attributes. But frame filtering is less scalable than frame tagging because each frame needs to be referenced to a filter table. Frame tagging, standardized as IEEE 802.1Q, is considered the more efficient approach.
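As a rough sketch of what an 802.1Q tag adds to an Ethernet frame (a simplified layout: real implementations also recompute the frame check sequence, which is omitted here):

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag into a bare frame laid out as dst(6)+src(6)+rest."""
    # TCI = 3-bit priority (PCP), 1-bit DEI (0 here), 12-bit VLAN ID.
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", TPID, tci)
    # The 4-byte tag goes between the source MAC and the EtherType.
    return frame[:12] + tag + frame[12:]

# Hypothetical untagged frame: zeroed MACs, IPv4 EtherType, dummy payload.
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(frame, vlan_id=10, priority=5)
print(len(tagged) - len(frame))  # 4 bytes added by the tag
```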

VLAN configuration resources

  • Get more information on configuring VLANs in this step-by-step screencast from David Davis. You’ll learn how to configure your routers and switches, set up and assign the trunk ports, and perform the necessary tests to get traffic moving across your VLAN successfully.
  • Chapter 3 of CCNA Self-Study by Steve McQuerry explores how VLANs control broadcasts in your network to provide more efficiency, and how to extend switched networks with VLANs.

 Configuring VLANs for server virtualization

VLANs and server virtualization go hand in hand. Server virtualization allows organizations to reduce the number of physical servers in the data center and provide scalable flexibility to other business needs, such as business applications. VLANs are integral to a virtualized environment thanks to the mobile nature of virtual machines.

The associated layer 2 services must be supported by the network mechanisms, and the best way to do this is to limit virtual machine mobility to physical servers in the same VLAN. By placing many servers in a single VLAN and limiting the virtual machines to that VLAN, administrators make it easy to migrate virtual machines between the physical servers. Not only do VLANs enable free movement of the virtual machines, they also allow network administrators to track them, keeping them secure and easily manageable.

One of the major challenges in server virtualization is in data center network configuration, and this includes VLANs. Many server virtualization products include extensive support for 802.1q VLAN tags. Data center network admins must configure VLANs on the switches to interact with the physical servers running the server virtualization software. Improperly configured VLAN settings can cause connectivity disruption for any workload running on that physical server.


 VLAN and the wireless LAN (WLAN)

VLAN trunking can be applied to wireless networking to help prioritize traffic. VLAN access points (APs) can be set up to work as multiple virtual WLAN infrastructures, using VLANs for varying levels of security — some for low-security guest Internet access, others for minimal-security enterprise users, a high-security VLAN for administrators, and so on. Using VLAN 802.1q tags, a network administrator can map wireless traffic to multiple VLANs and assign priority.

How exactly does this work? Wireless AP traffic is concentrated through an 802.1q-capable wireless switch or gateway; the device tags the packets before forwarding them. Through appropriate tagging, the packets move onto roles defined by the tags, whether the role be guest or employee.

With administrative traffic kept isolated from end-user traffic, network administrators can breathe a little easier, knowing that wireless data is being routed properly thanks to VLAN tagging.

VLAN and WLAN resources

  • Learn different methods of creating VLANs on a WLAN in this expert response from Lisa Phifer.
  • This tip describes how to use these same VLAN capabilities, found in both wired and wireless devices, to tag and compartmentalize Wi-Fi traffic, supporting your company’s security and traffic management policies.
  • 802.1X/EAP makes it possible to authenticate individual wireless users. But 802.1X can also be used to funnel wireless traffic onto VLANs, enforcing user or group-based permissions. This tip explains how to use RADIUS attributes returned by 802.1X to supply VLAN tags, establishing that critical link between authentication and authorization.
  • Many business networks rely on VLANs to partition Ethernets and control the destinations reached by each accessing user. Enterprise users shift between Ethernet and Wi-Fi throughout the workday, so it makes sense to apply VLANs to both wired and wireless network access. This tip describes best practices for mapping Wi-Fi stations onto corporate VLANs.
  • Get tips and advice on implementing a split VLAN wireless structure with an authenticated access and Internet only access.

 Troubleshooting VLANs

Even after VLANs have been implemented properly and efficiently, it is inevitable that network administrators and managers will run into problems. Troubleshooting VLANs is not quite as simple as troubleshooting a traditional network. It’s relatively easy to tell if a network device isn’t performing. But in a switched network with virtual trunks and paths, it’s not always easy to tell what’s making a network run slowly. Plugging in that protocol analyzer isn’t going to cut it for troubleshooting VLANs, and the resources below will help you monitor the problems.

VLAN troubleshooting resources

  • When troubleshooting a virtual LAN (VLAN), learn how to monitor 802.1q tagged traffic within a network in this advice from our routing and switching expert.
  • VLANs are popular targets for attacks. Learn how to secure a VLAN from popular attacks such as the VLAN Hopping attack and Address Resolution Protocol attack.

Understanding VLAN implementation and IP address assignment:

How do you configure VLANs in a domain environment? How will clients get their IPs, and is it possible to communicate across different VLANs in this environment? What will the configuration requirement be on the DHCP server?

The answer to the first question is, as always: it depends. Configuring VLANs is fairly straightforward based on the platform chosen in the switching environment to support VLAN administration. Unfortunately, configuration is the easy part; the hard part is producing an adequate VLAN design.

Once you have that, configuration is clearly documented in the switch manual. To get a good VLAN design, it is imperative that you understand the network, application distribution model, and user access methods.

Clients do not get their IPs based on the VLAN assignment model. That is traditionally handled either by deploying a DHCP server (generally used for client machines) or by statically assigning IP addresses (generally used for DNS management of servers). Oftentimes this is more of a political decision (got to love the layer 8 issues) than a technical one; but a fairly good design model is to use DHCP servers for client workstations, set up into DHCP zones, and to use DNS and manual IP assignment for servers.

The configuration requirements for DHCP servers traditionally depend on the size of your network and how you logically want to break up these networks. Most notably, I see DHCP zones logically set up by geographic region. This configuration tends to provide better stability over routing and switched environments and, coupled with the proper VLAN configuration, keeps local traffic local, which is always a good thing.

Wrong subnet mask effect on a host:

I was reading chapter 15 of the Odom ICND1 book, which is about troubleshooting IP routing. One of the points brought up is that all computers in the same LAN have to have the same subnet mask, which I understand. My question for CLN is what kind of issues would arise on the client side with an incorrectly configured network mask.

For example, let’s say we have PC1 and PC2.

PC1 has an IP of /24

PC2 has an IP of /25

The Gateway is /24. Both clients are configured with everything correct except the subnet mask. Would PC1 be able to ping PC2? Or would the replies “get lost” because PC2 believes that PC1 is on a different subnet and therefore would send the replies to the gateway? What other sorts of issues would PC2 have?

As I read the section in the chapter I was trying to imagine all the issues PC2 would run into. I will test this at some point in the near future with my lab at home, but I was just curious if anyone could respond with a more complete picture as to what kind of issues PC2 would have.

Thanks in advance to all the great people here on CLN.


You are correct, the packet would get lost because PC2 believes PC1 is in a different subnet, so it would send the reply to the default gateway, and the router would presumably not have a route to it. If PC2’s IP address were in the subnet, it would believe that PC1 is in the same subnet, and so the ping would work, even though the mask is wrong.


Your question really hits home with the fundamentals of how Ethernet and IP coexist. In the CCENT, they give you reasons for how things should work, but don’t tell you what happens when not all the configurations on the same subnet are correct.

Here are the configurations:

PC1 – /24

PC2 – /25

Default Gateway – /24

Would PC1 be able to ping PC2? The answer: yes it can. The reasoning: the default gateway doesn’t know that PC2 has the wrong subnet mask (assumptions are made).

Order of operations:

PC2 sends a ping to PC1. First, an ARP request for the default gateway is needed, because PC2 believes PC1 is on a different subnet.

The ARP packet is broadcast and the default gateway replies with its MAC address.

PC2 sends the ICMP packet destined for PC1 to the default gateway because it believes it’s on a different network.

The default gateway receives the ICMP packet and processes it. Typically routers are used to forward information off the subnet, but because the information on PC2 is incorrect, the packet still needs to go through the router’s destination lookup process. The router finds that PC1 is connected via the /24 and will begin forwarding.

The default gateway will ARP to PC1 for its MAC address.

PC1 replies with its MAC address to the default gateway.

The default gateway forwards the packet to PC1.

PC1 processes the packet and needs to reply. It checks the destination address against its subnet mask through ANDing and finds that the destination is on the same subnet. Remember, PC1 doesn’t know PC2 has incorrect information. It’s acting according to the rules of subnetting.

PC1 ARPs to PC2 for its MAC address. This is a layer 2 operation.

PC2 replies with a unicast frame back to PC1 since it’s all handled in Layer 2. No need for the default gateway in this step.

PC1 receives reply from PC2 and sends ICMP packet directly to PC2.

PC2 receives ICMP packet and a reply is presented to the user as successful.

So what’s wrong with this picture? Well, 2 of the 3 devices have accurate information and can make accurate decisions. The only device making bad decisions is PC2, because it has inaccurate information.

This particular example is a “pretty” example because all devices are on the same VLAN with similar settings. The default gateway (acting accordingly) can “correct” the PC2’s intentions but this is only because PC2’s subnet falls ‘inside’ PC1+Gateway’s subnet. If it didn’t fall inside the VLAN’s proper subnet, this whole operation wouldn’t work.

For example: VLAN 1 has /24. Gateway:, PC1:, PC2: PC2 will not be able to communicate with the router or PC1 because it sits well outside the correct subnet, and the devices will simply drop the packet.

Hope that clarified things for you.


Thank you for the reply. What other sorts of issues would you see? If PC2 opened up a browser and tried to get out to the internet would it be able to? Or would the replies from the webserver get lost on their way back because the router doesn’t have a route to that subnet?


That reply is a great step-by-step on how the devices and associated protocols would do their job. The interesting thing I took away from it was how PC2 would still get a reply even though it was not configured properly. I figured that the PC1 echo replies would get lost because it would forward them back to the gateway and the gateway doesn’t have a route to that subnet. I want to actually set up two computers in my lab and test it out this weekend just for shizz and giggles. Thanks all for the great responses.


No – it works because routers route based on networks, not hosts. Therefore, routers advertise networks with subnets through routing updates, not individual host information. Devices outside the /24 network have no idea, and are not programmed to care, what is inside of it or how to forward from the router to the host. Once inside the network, it is up to the layer 2 devices to properly forward information to the correct destination.

Again, the success of this example is all because the /25 network sits inside the /24 network. If it didn’t, everything would fail when it comes to communication with PC2.

Now, I will say this: networks can have more than one subnet. This is not common, because traditionally networks have one subnet to keep things easy. If you have multiple subnets on the same physical hardware, you would create VLANs to separate the subnets and use a Layer 3 device to facilitate communication between them. That’s the right way to do things, but that doesn’t mean you can’t do it the other way, with 2 or more subnets located on the same VLAN.

For instance, if you have PC1 and PC2 on /24 network and you have PC3 and PC4 on the /24 network, you can attach them all to the same switch and PC1 will communicate with PC2 and PC3 will communicate with PC4 because the rules are all correct. But, PC1/PC2 can’t communicate with PC3/PC4 because they are located on different subnets and require routing.

If this is confusing, it’s because the CCENT doesn’t talk about the effects of incorrect configurations. They just tell you the right way and you build your knowledge with that information and move on. This is done so it  doesn’t overwhelm you.


The reason why PC1 doesn’t “lose” the packet is because it perceives that is on the /24 subnet. It doesn’t know about the /25 subnet, only PC2 knows about this.


Gotcha. So PC1 receives the ping, sees the source IP and says to itself: this computer is in my LAN, so let me send the reply to it once I figure out what the MAC address is. So PC1 does the ARP request and PC2 replies saying, hey that’s me  and the reply would go to the default gateway because PC2 looks at the source IP of PC1 and says to itself: this computer is on another subnet. So the reply by PC2 gets forwarded to the default gateway, which then forwards it to PC1. Is that correct?


You are mostly correct until PC2’s ARP reply. ARP is a layer 2 protocol and is never routed. The only reason things are sent to the default gateway is so that they will be routed. The ARP reply is simply switched from PC2 directly back to PC1. No default gateway is needed.

The only reason why PC2 originally ARP’d to the default gateway is because after the ANDing process was completed, the results showed that PC1 would be on a different subnet. This is PC2’s error, and it’s all because of a simple /25 mistake.

Communication from PC2 would go: PC2->Gateway->PC1 for ICMP packets

Communication from PC1 would go: PC1->PC2 for ICMP packets.

ICMP is routable; ARP is not. Remember that the default gateway and PC1 are working correctly based on the rules of IP routing and Ethernet switching. Keep the two ideas of routing and switching separate. Ethernet is one of many layer 2 intra-network forwarding technologies that IP uses to forward information. ARP is simply a layer 2 to layer 3 translation mechanism that is only used with Ethernet. Therefore, ARP will never need to go to the default gateway for hosts in the same VLAN unless there is a problem with the configuration. In this case, there is a problem because /25 needs to be /24. That’s why PC2 ARPs the default gateway at the initial start, but its response can go directly back to the sender in a /24 network.

If you still have questions let me know.


I think the problem I’m having understanding is that I went over ARP many chapters ago. I’m about done with the book and once I finish I’m going to review all of my notes and will go over ARP again. Or maybe I will crack the book open during my lunch hour and review it sooner. Thank you for your responses. Very helpful and informative.


I’m SUPER late to this conversation but because it came up in a google search while I was poking around about subnet masks complications, I wanted to weigh in in case others see this.

I don’t think the steps shown for what would happen here are accurate. Right out of the gate, yes, PC2 would ARP out for its gateway, but that would be the end of it. The 2nd step listed (ARP packet is broadcasted and the default gateway replies with its MAC address.) says it all. The broadcast for PC2 would be limited to (.129 – .254), so it would not even be able to communicate with a default gateway of  That gateway IP may as well be because one way or another 1.1 is NOT in the same subnet as so it cannot be used to route that machine’s packets to a different subnet.


I haven’t labbed this up, so could be wrong – but I think this works because the ARP broadcast from PC2 is actually a L2 broadcast i.e. MAC address FF:FF:FF:FF:FF:FF looking for host IP; because PC2 is on the same wire as the gateway, the gateway would respond with its MAC address.


PC2 wouldn’t even accept the configuration of a default gateway outside of its configured subnet for this very reason.

In theory, if an ARP request were sent out, the gateway would reply; however, PC2 would drop the frame as it belongs to a different subnet.

Try setting up a secondary subnet on the same VLAN and you should find your answer.

EDIT: RFC 1009 is a good document to read up on if you have the time. More specifically it begins to introduce this concept on page 4 –


Good point about whether you could actually enter the “invalid” DG address. But what if the DG for was actually – this would be valid on both and


OK, so grabbed a Windows 7 box –

You *can* set ip address SN GW – Windows throws a warning but allows the config to be saved.

PC1 set, DG, PC2 set, DG – PC2 can ping PC1, PC1 can ping PC2 – not surprising, since the two addresses are both within both SN masks iyswim.

PC3 set, DG – PC1 can ping PC3 (of course), PC2 can’t…

[Unforeseen side effect – my internet connection on the router was knocked out by some of the misconfig on the n/w]

REVISE: msdos VS powershell VS cmd VS windows script host VS powershell core VS

Windows Script Host:

The Microsoft Windows Script Host (WSH) (formerly named Windows Scripting Host) is an automation technology for Microsoft Windows operating systems that provides scripting abilities comparable to batch files, but with a wider range of supported features. The tool first appeared on Windows 95 (after Build 950a) as an optional installation on the installation discs, configurable and installable by means of the Control Panel; it then became a standard component of Windows 98 (Build 1111) and subsequent releases, and of Windows NT 4.0 (Build 1381) by means of Service Pack 4. The WSH is also a means of automation for Internet Explorer via the installed WSH engines from IE Version 3.0 onwards; at this time VBScript became the means of automation for Microsoft Outlook 97.[1] The WSH is also an optional install provided with a VBScript and JScript engine for Windows CE 3.0 and later, and some third-party engines, including Rexx and other forms of Basic, are also available.[2][3][4]

It is language-independent in that it can make use of different Active Scripting language engines. By default, it interprets and runs plain-text JScript (.JS and .JSE files) and VBScript (.VBS and .VBE files).

Users can install different scripting engines to enable them to script in other languages, for instance PerlScript. The language independent filename extension WSF can also be used. The advantage of the Windows Script File (.WSF) is that it allows the user to use a combination of scripting languages within a single file.

WSH engines include various implementations for the Rexx, BASIC, Perl, Ruby, Tcl, PHP, JavaScript, Delphi, Python, XSLT, and other languages.

Windows Script Host is distributed and installed by default on Windows 98 and later versions of Windows. It is also installed if Internet Explorer 5 (or a later version) is installed. Beginning with Windows 2000, the Windows Script Host became available for use with user login scripts.


Windows Script Host may be used for a variety of purposes, including logon scripts, administration and general automation. Microsoft describes it as an administration tool.[5] WSH provides an environment for scripts to run – it invokes the appropriate script engine and provides a set of services and objects for the script to work with.[5] These scripts may be run in either GUI mode (WScript.exe) or command line mode (CScript.exe) offering flexibility to the user for interactive or non-interactive scripts.[6] Windows Management Instrumentation is also scriptable by this means.

The WSH, the engines, and related functionality are also listed as objects which can be accessed and scripted and queried by means of the VBA and Visual Studio object explorers and those for similar tools like the various Script Debuggers and Editors. Conversely, VBA is a third WSH engine installed by default.

WSH implements an object model which exposes a set of Component Object Model (COM) interfaces.[7] So in addition to ASP, IIS, Internet Explorer, CScript and WScript, the WSH can be used to automate and communicate with any Windows application with COM and other exposed objects, such as using PerlScript to query Microsoft Access by various means including various ODBC engines and SQL, ooRexxScript to create what are in effect Rexx macros in Excel, Quattro Pro, Microsoft Word, Lotus Notes and the like, or the XLNT script to get environment variables and print them in a new TextPad document. The VBA functionality of Microsoft Office, OpenOffice (as well as Python and other installable macro languages) and Corel WordPerfect Office is separate from WSH engines, although Outlook 97 uses VBScript rather than VBA as its macro language.[8]

Python in the form of ActiveState PythonScript can be used to automate and query the data in SecureCRT, as with other languages with installed engines, e.g. PerlScript, ooRexxScript, PHPScript, RubyScript, LuaScript, XLNT and so on. One notable exception is Paint Shop Pro, which can be automated in Python by means of a macro interpreter within the PSP programme itself rather than using the PythonScript WSH engine or an external Python implementation such as the Python interpreters supplied with Unix emulation and integration software suites or other standalone Python implementations,[9][10] and indeed it can be programmed like this even in the absence of any third-party Python installation; the same goes for the Rexx-programmable terminal emulator Passport.[11] The SecureCRT terminal emulator, SecureFX FTP client, and related client and server programmes from Van Dyke are, as of the current versions, automated by means of the WSH, so any language with an installed engine may be used; the software comes with VBScript, JScript, and PerlScript examples.

As of the most recent releases and going back a number of versions now, the programmability of Take Command and 4NT in the latest implementations (by means of “@REXX” and similar for Perl, Python, Tcl, Ruby, Lua, VBScript, JScript and the like and so on) generally uses the WSH engine.[12] The ZOC terminal emulator gets its ability to be programmed in Rexx by means of an external interpreter, one of which is supplied with the programme, and alternate Rexx interpreters can be specified in the configuration of the programme.[13][14] The MKS Toolkit provides PScript, a WSH engine in addition to the standard Perl intepreter perl.exe which comes with the package.

VBScript, JScript, and some third-party engines have the ability to create and execute scripts in an encoded format which prevents editing with a text editor; the file extensions for these encoded scripts are .vbe and .jse, among others of that type.

Unless otherwise specified, any WSH scripting engine can be used with the various Windows server software packages to provide CGI scripting. The current versions of the default WSH engines and all or most of the third-party engines have socket abilities as well; as a CGI script or otherwise, PerlScript is the choice of many programmers for this purpose, and the VBScript and various Rexx-based engines are also rated as sufficiently powerful in connectivity and text-processing abilities to be useful. This also goes for file access and processing: the earliest WSH engines for VBScript and JScript lacked these abilities, since the base languages did not provide them,[15] whilst PerlScript, ooRexxScript, and the others have had them from the beginning.

WinWrap Basic, SaxBasic and others are similar to Visual Basic for Applications. These tools are used to add scripting and macro abilities to software being developed, and can be found in earlier versions of Host Explorer, for example. Many other languages can also be used in this fashion. Other languages used for scripting of programmes include Rexx, Tcl, Perl, Python, Ruby, and others which come with methods to control objects in the operating system and the spreadsheet and database programmes.[16] One exception is that the Zoc terminal emulator is controlled by a Rexx interpreter supplied with the package or another interpreter specified by the user; this is also the case with the Passport emulator.

VBScript is the macro language in Microsoft Outlook 97, whilst WordBasic is used for Word up to 6, PowerPoint and other tools. Excel up to 5.0 uses Visual Basic 5.0. From Office 2000 forward, true Visual Basic for Applications 6.0 is used for all components. OpenOffice uses Visual Basic, Python, and several others as macro languages, and others can be added. LotusScript is very closely related to VBA and is used for Lotus Notes and Lotus SmartSuite, which includes Lotus Word Pro (the current descendant of Ami Pro), Lotus Approach, Lotus FastSite, Lotus 1·2·3, &c., and pure VBA licensed from Microsoft is used in Corel products such as WordPerfect, Paradox, Quattro Pro, &c.

Any scripting language installed under Windows can be accessed externally: PerlScript, PythonScript, VBScript and the other engines available can be used to access databases (Lotus Notes, Microsoft Access, Oracle, Paradox) and spreadsheets (Microsoft Excel, Lotus 1·2·3, Quattro Pro) as well as other tools such as word processors, terminal emulators, command shells and so on. This can be accomplished by means of the WSH, so any language can be used if there is an installed engine.

In recent versions of the Take Command enhanced command prompt and tools, the “script” command typed at the shell prompt will produce a list of the currently installed engines, one to a line and therefore CR-LF delimited.[17][18][19]
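As a sketch of what “one to a line and therefore CR-LF delimited” means for anything consuming that output, the following JavaScript (the same ECMAScript family as JScript) splits an invented sample of such an engine list; the engine names here are illustrative, not the actual output of any particular system:

```javascript
// Invented sample of a CR-LF delimited engine list, one engine per line.
const output = "VBScript\r\nJScript\r\nPerlScript\r\n";

// Split on the CR-LF pair and drop the empty trailing entry.
const engines = output.split("\r\n").filter(s => s.length > 0);

console.log(engines); // ["VBScript", "JScript", "PerlScript"]
```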


The first example is very simple; it shows some VBScript which uses the root WSH COM object “WScript” to display a message with an ‘OK’ button. Upon launching this script, the CScript or WScript host would be invoked and the runtime environment provided.

Content of a file hello0.vbs

WScript.Echo "Hello world"

WSH programming can also use the JScript language.

Content of a file hello1.js

WScript.Echo("Hello world");

Or, code can be mixed in one WSF file, such as VBScript and JScript, or any other:

Content of a file hello2.wsf


<job id="hello2">
   <script language="VBScript">
      MsgBox "hello world (from vb)"
   </script>
   <script language="JScript">
      WScript.Echo("hello world (from js)");
   </script>
</job>


Security concerns[edit]

Windows applications and processes may be automated using a script in Windows Script Host. Viruses and malware could be written to exploit this ability. Thus, some suggest disabling it for security reasons.[20] Alternatively, antivirus programs may offer features to control .vbs and other scripts which run in the WSH environment.

Since version 5.6 of WSH, scripts can be digitally signed programmatically using the Scripting.Signer object in a script itself, provided a valid certificate is present on the system. Alternatively, the signcode tool from the Platform SDK, which has been extended to support WSH filetypes, may be used at the command line.[21]

By using Software Restriction Policies introduced with Windows XP, a system may be configured to execute only those scripts which are stored in trusted locations, have a known MD5 hash, or have been digitally signed by a trusted publisher, thus preventing the execution of untrusted scripts.[22]

Available scripting engines[edit]

Note: By definition, all of these scripting engines can be utilised in CGI programming under Windows with any number of programmes and setups, meaning that the source code files for a script used on a server for CGI purposes could bear other file extensions such as .cgi and so on. As noted above, the Windows Script Host can also run a script with multiple languages in it in files with a .wsf extension. Extended HTML and XML add further possibilities when working with scripts for network use, as do Active Server Pages and so forth. Moreover, Windows shell scripts and scripts written in shells with enhanced capabilities like TCC, 4NT etc., and Unix shells under interoperability software like the MKS Toolkit, can have scripts embedded in them as well.

Engine | Scripting language implemented | Base language | File extensions | Availability | Produced by | Status | Initial release | Encoded scripts | Notes
VBScript | Microsoft VBScript | Microsoft Visual Basic 6.0 | .vbs | Installed by default | Microsoft | Default install | 1999 | Yes, .vbe |
JScript | Microsoft JScript | Microsoft Visual J++ 6.0 | .js | Installed by default | Microsoft | Default install | 1999 | Yes, .jse |
VBA | Microsoft Visual Basic for Applications | Microsoft Visual Basic 6.0 | .mod, .bas, .frm, other | Standard feature of Microsoft Office products[23] | Microsoft | Standard feature of Microsoft Office products | 1999 | Unknown |
JScript .NET WSH Engine | Microsoft JScript .NET | Microsoft J++/.NET | .js* | .NET Framework component | Microsoft | With various tools, .NET Framework | 2003 | Yes | May require manual install/config
VB.NET WSH Engine | Microsoft VB.NET | Microsoft Visual Basic .NET | .vb* | .NET Framework component | Microsoft | With various tools, .NET Framework | 2003 | Yes | May require manual install/config
WinWrap Basic | WinWrap Basic | Basic | .wwb | In the main WWB installation | Polar Engineering | Standard functionality of WWB; utilises both .NET and COM | 2004 | Yes |
PerlScript | Perl | Perl 5 | .pls | With ActiveState Perl | ActiveState | Open source | 1999 | Reportedly yes |
PScript | Perl | Perl 5, CGI functionality | .p, .ps | With MKS Toolkit | MKS | Commercial | 2001 | |
XBScript | xBase Scripting Engine | xBase (Clipper) | .xbs, .prg | With XBScript software | Clipper | Commercial | | |
LotusScript WSH | LotusScript | Microsoft Visual Basic (q.v.) | .nsf | Third-party download | Service Desk Plus | Freeware | 2001 | |
RexxScript | Rexx | Rexx | .rxs, .rx, .rex | With some Rexx implementations | Various | Freeware | 1998 | |
ooRexxScript | Open Object Rexx | Rexx | .rxs | With Open Object Rexx or free from some third parties | Open Object Rexx team | Open source | | |
PythonScript | Python | Python | .pys | SourceForge and with ActivePython | The Pywin32 project | Open source | | |
TclScript | Tcl/Tk | Tcl/Tk | .tcls | SourceForge | ActiveState or third party | Open source | | |
ActivePHPScript | PHP | PHP | .phps | With PHP | PHP team | Open source | | |
PHPScript | PHP | PHP | .phps | With PHP | PHP team | Open source | | | Earlier version of ActivePHPScript
RubyScript | Ruby | Ruby | .rbs | With Ruby distribution | Ruby team | Open source | | Yes |
XLNTScript | XLNT | DCL | .xcs | With XLNT | Advanced Systems Concepts, Inc. | Commercial | 1997 | | An OpenVMS DCL-based multi-purpose scripting application for Windows
LuaScript | Lua | Lua | .lua | With Lua | Lua organisation | Open source | | |
Object REXX engine | Object REXX | Rexx | .rex, .rxs | With IBM Object REXX | IBM | Commercial | 2002 | |
XML Engine | XML parsing | Extended HTML, XML | .xml | With many XML implementations | Elf Data | De facto default install | 2000 | | Macintosh too
Kixtart WSH Engine | KiXtart | KiXtart, MS-DOS, Windows 95 and Windows NT shells | .kix | With KiXtart | Microsoft Netherlands | Windows Resource Kits and other resources | 1996 | | Download from Microsoft or elsewhere, aka KiXtart32
NullScript | NullScript | Null language | .ns | With NullScript | NullScript Organisation | Windows Resource Kits and other resources | 1999 | |
ForthScript | Forth | Forth | .fth, others | Forth | DMOZ | Open source | | |
Haskell Script | Haskell | Haskell | .hsk (provisional), others | Free download | | Open source | | |
XSLT WSH Engine | XSLT | XSLT | .xslt | Free download | | Open source | | |
CobolScript WSH Engine | Cobol | Cobol | .cbl, .cob, .cb | Fujitsu Cobol 3, free for educational use | Fujitsu | Commercialware; free with free compiler for educators &c; proprietary | | |
Delphi scripting engine | Delphi | Delphi, a Pascal variant | .dlp, .del, others | In some Delphi distributions or resource kits | | Commercial | 2003 | |
DMDScript | DMDScript | D, a major incrementation of C | .dmd | DMD distributions, download | | Freeware, available on Web | 2014 | | DMD
C# Script | C# | Microsoft C#/.NET | .cs, .c#, others | Source code available | | Open source, active development underway | 2013 (unclear) | |
Small C Scripting Engine | C | C (K&R, ANSI) | .c, others | Various locations, check Web | | Freeware | 2009 | |
JavaScript WSH Engine | JavaScript/Java | Java | .j, .jva, others | With many JavaScript implementations | Sun/other Java organisations | Freeware | | |
Take Command WSH Engine | 4NT/Take Command | TCC, the current version of 4NT | .btm, .cmd, .bat, others | Check JP Software | JP Software | Proprietary | 2015 | | Early development
92Script WSH Engine | TI-89/92+/Voyage 200 TI-Basic | Calculator TI-Basic | .92bs | Project Web/FTP site | Various independent programmers | Experimental, open source | 2014 | “Possible” | Beta Q4 2015 for main engine; graphing functionality (92Script/Tk) then or later
48Script WSH Engine | HP-48 calculator family on-board programming language | HP 48 programming language, distant relative of Forth, Basic, Lisp | .48s | Project Web/FTP site | Various independent programmers | Experimental | 2015 | Planned | Status as of 2015-09-30. Language has Lisp, Basic, Forth, and other influences.
Fortran Script | Fortran | Fortran 77 | .for, .ftn, .f77, .f90, .f95 | Various | Various | Experimental proof-of-concept, academic exercise, shareware, commercial, open source | 2000 | |
PascalScript | Object Pascal | Pascal 7 | .pas, .ops, other | Object Pascal | RemObjects | Freeware | 2001 | | Can also be used with Delphi directly
Lisp WSH Engine | Lisp | Lisp | .lisp, .lsp | Various Lisp tools | AutoLisp and others | Freeware or shareware | | |
BESEN | ECMA JavaScript | Java and variants | .bes, .bsn, others | SourceForge | BESEN Organisation | Open source | 2011 | |
ECMAScript WSH engines | Java and variants | Various | Various | Various | Various | Experimental, freeware, open source, shareware, proprietary, commercialware | 2005 | | There are numerous ECMAScript implementations but not all have WSH engines
CFXScript WSH Engine | Casio CFX-9850 and fx calculator series on-board programming language | Casio calculator programming language, as ported to various operating systems as CFW | .cfxb | Project Web/FTP sites | Independent programmers | Experimental | 2015 | Planned[24] | Status as of 2015-09-30. Language has elements of Basic, Forth, Fortran, and others.
SharpCalcScript WSH Engine | Sharp graphing calculators on-board programming language | Sharp S-Basic as ported to Windows as NeusSFortran | .scsb | Project Web/FTP sites | Independent programmers | Experimental | 2015 | Planned | Status as of 2015-09-30. Also subsumes the S-Basic language of Sharp’s Pocket Computers.

There have been suggestions of creating engines for other languages, such as LotusScript, SaxBasic, BasicScript, KiXtart, awk, bash, csh and other Unix shells, 4NT, cmd.exe (the Windows NT shell), Windows PowerShell, DCL, C, C++, Fortran and others.[25] The XLNT language[26] is based on DCL and provides a very large subset of the language along with additional commands and statements, and the software can be used in three ways: the WSH engine (*.xcs), the console interpreter (*.xlnt) and as a server- and client-side CGI engine (*.xgi).[27]

When a server implementing CGI, such as the Windows Internet Information Server or ports of Apache and others, is used, all or most of the engines can be employed; the most commonly used are VBScript, JScript, PythonScript, PerlScript, ActivePHPScript, and ooRexxScript. The MKS Toolkit PScript programme also runs Perl. Command shells like cmd.exe, 4NT, and ksh, and scripting languages with string processing and preferably socket functionality, are also able to be used for CGI scripting; compiled languages like C++, Visual Basic, and Java can also be used like this. All Perl interpreters, ooRexx, PHP, and more recent versions of VBScript and JScript can use sockets for TCP/IP and usually UDP and other protocols for this.

Version history[edit]

Windows version | Shipped with WSH version | Last redistributable version
Windows 95 | None (separate redistributable) | 5.6
Windows NT 4.0 | None (separate redistributable) | 5.6
Windows NT Server 4.0 | None (separate redistributable) | 5.6
Windows CE 3.0 | 1.0 (optional install on installer disc) | 2.0
Windows 98 | 1.0 | 5.6
Windows 98 Second Edition | 1.0 | 5.6
Windows 2000 | 2.0 (also termed WSH 5.1) | 5.7
Windows 2000 Server | 2.0 (also termed WSH 5.1) | 5.7
Windows 2000 SP3, SP4 and SP5 | 5.6 | 5.7
Windows Me | 2.0 (also termed WSH 5.1) | 5.6
Windows XP | 5.6 | 5.7
Windows XP SP3 | 5.7 | Not applicable
Windows Server 2003 | 5.6 | 5.7
Windows Vista | 5.7 | Not applicable
Windows Server 2008 | 5.7 | Not applicable
Windows 7 | 5.8 | Not applicable
Windows Server 2008 R2 | 5.8 | Not applicable
Windows 8 | 5.8 | Not applicable
Windows Server 2012 | 5.8 | Not applicable
Windows 10 | 5.812 | Not applicable
Windows Server 2016 | 5.812 | Not applicable

The redistributable version of WSH 5.6 can be installed on Windows 95/98/Me and Windows NT 4.0/2000. WSH 5.7 is downloadable for Windows 2000, Windows XP and Windows Server 2003. Redistributable versions for older operating systems (Windows 9x and Windows NT 4.0) are no longer available from the Microsoft Download Center.

Since Windows XP Service Pack 3, release 5.7 has not needed to be installed separately, as it is included, with newer revisions shipping in newer versions of Windows since.

What is the difference between CMD and Powershell?

At first look PowerShell is pretty similar to cmd. They are both used to run external programs like ping or copy, and give you a way to automate tasks by writing a script/batch file.

But PowerShell is a lot more than that. 

First of all, it provides a very rich set of commands (called cmdlets) that integrate deeply with Windows and most Microsoft products; for example, Get-Process lists active processes.

Another big difference is that the output of this command is not just text, it’s a collection of objects. This is way superior to outputting just text, because you can easily query any property of the object, like its name or memory consumption. In cmd you’d have to parse the output.
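A rough sketch of this difference, written in JavaScript (JScript’s ECMAScript cousin) with invented data: the first half parses a text line the way a cmd script must, the second reads a property the way a PowerShell pipeline can:

```javascript
// cmd-style: recover a process's memory use by parsing a text line.
// Fragile: this breaks if the column layout ever changes.
const line = "chrome      1234   250000";
const memFromText = Number(line.trim().split(/\s+/)[2]);

// PowerShell-style: the "output" is an object, so you just read a property.
const proc = { name: "chrome", id: 1234, workingSet: 250000 };
const memFromObject = proc.workingSet;

console.log(memFromText === memFromObject); // true: same value, no parsing needed
```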

Another great thing about PowerShell is its integration with .NET. You can easily use all the classes defined in .NET to program any functionality cmdlets are missing.

You can also integrate the PowerShell runtime into .NET applications with ease and consume the PowerShell output objects directly.

All in all, PowerShell is cmd on steroids that lets you automate and manage Windows more easily.


The simple answer is that PowerShell is object-oriented, and cmd.exe is string-based.  But that’s glossing over the “why is this not just different, but BETTER?”, which is the underlying question.

PSH can create .NET objects on the shell and allow the user to interact with them.  

I can, interactively, open an XML file, browse into it using the DOM (ParentNode.ChildNode.ChildNode.Attribute), using XPath (SelectSingleNode(“//host”)), or [XML] class methods (GetParentNode()), edit it with $xml.CreateElement()/$node.AppendChild(), save it, and fire it up in Visual Studio.  Interactively.

Normally, I would have to write a program to do it, compile it, run it, (wait for it to bomb out), and then manually open the file in VS.

I can write a script to fire up a new Internet Explorer, have it navigate to a URL, then start filling out fields in forms and expand dynamic content (with great difficulty)  using COM automation.  Or do the same thing in Excel – take a CSV file, dump it into Excel, reformat it (this table style, add these formulas, that column width) and save it.

I can resize the console window.  Or change the title.  Or increase the buffer (really?  Only 300 lines by default?)  Or copy the contents of the buffer, complete with colors.


CMD is the older “shell” or “batch” language, going back in some form to the original DOS operating system.

PowerShell is the new and vastly improved shell and programming language first made available as an add-on and now included (and now enabled) in the Windows operating systems.

While CMD is technically a “programming language”, it is a very poor one where it is difficult to define general-purpose variables, assign them values, and use modern control structures.

For example, CMD includes “if/then/else”, call (to a subroutine label or batch file), and an odd form of “for” (really a “foreach”) loop, but other than that it is largely dependent on “goto”, while PowerShell has excellent flow-of-control structures.

PowerShell was designed to offer an EFFECTIVE superset of CMD functionality, which it comes close to doing directly, AND it can call CMD for anything that isn’t available directly.

PowerShell supports not only calling CMD as an “external program” but calling any other pre-existing or new program as an external extension to its own functionality.

PowerShell has direct access to almost all of DOTNET, and easy access to COM and even C#, F# and other DOTNET languages.

PowerShell passes and returns OBJECTS (structured data with methods, events, and properties) to other commands and returns these objects from most commands as well while CMD produces only text.

This idea of ‘objects’ is extremely important, as it removes the need to “parse” results based on line position or characters on the lines; instead one merely queries the objects for the properties available.

PowerShell also now includes LITERALLY THOUSANDS of functions, cmdlets, and other functionality, and is extremely extensible.

PowerShell’s regular expressions are comparable to those in Perl, or perhaps arguably more powerful and capable. One of the main things about regular expressions in Perl is that besides being very extensive they are a NATURAL part of the language; PowerShell comes close to this ideal of integration established by Perl for regular expressions.
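For illustration, here is the Perl-flavoured capture-group style being described, written in JavaScript regex syntax (the log line itself is invented for the example):

```javascript
// Capture groups in the PCRE-ish flavour shared by Perl, PowerShell,
// and JScript: pull the level, code, and message out of a log line.
const logLine = "error 404: page not found";
const m = logLine.match(/^(\w+) (\d+): (.+)$/);

console.log(m[1]); // "error"
console.log(m[2]); // "404"
console.log(m[3]); // "page not found"
```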

PowerShell includes extensive and excellent error handling.

PowerShell includes extensive, extensible, and excellent Help.

PowerShell’s main deficiencies are these:

  • Not available by default on every version of Windows still commonly in use (Win2003/XP)
  • Not enabled even when present until the latest versions of Windows — Win2012 and Windows 8.


Cmd is an antiquated CLI that mainly executes console programs that live on your file system, and processes the output they provide as text. Powershell is an object oriented shell/programming language that processes objects. It uses a ‘pipeline’ approach, much like bash, except instead of processing text in the pipeline, it processes objects. This allows you to call upon that object’s properties and methods, making it a very powerful tool to have in the toolset. It’s a .NET language, so you can actually build full featured apps with it, like Windows Forms apps, WPF apps etc, as well as more traditional scripts.

Powershell can also invoke the same command line utilities that CMD does, such as ping or netstat, but it also has Cmdlets. Cmdlets are not console programs, but actually functions that either live inside compiled DLLs on your system, typically written in C#, or Powershell based functions that live inside Powershell modules.

Powershell can do everything CMD does and much, much more. In fact one of its primary purposes was to replace both CMD, batch files and vbScripts.


CMD is basically an updated version of the DOS prompt. It’s very simple, and its only commands are related to file management (which is not needed anymore as we have Windows Explorer); everything else is done by executing .exe files or other batch files which are made of the same commands. Powershell, on the other hand, is designed for automation. It has much more advanced commands which can often do what you can’t do with CMD, or which is incredibly hard to do with it. For example, the command Add-AppxPackage installs a Windows Store application from an .appx file – that’s called sideloading. This action is very complicated because of the nature of Windows Store apps, which are deeply integrated with the Windows Store and the shell itself. It can’t be done with CMD at all.

Powershell also has a more advanced support for variables, objects, making it as powerful as other scripting languages. Batch is also technically a scripting language, but it only has very simple functionality: even loops need to be implemented using the “goto” command, which is one of the many things that make the code hard to read.


You can run CMD commands in Powershell, but not the opposite.

PowerShell is much more powerful and allows much more modern and sound scripting.

Then PowerShell uses objects, not strings.

In other words, you can take the output of a command and reuse it in other commands much more easily than how you’d do with a traditional CMD.

Then Powershell comes from C#, so it’s much more powerful as a programming tool.


Just some additional information: since PowerShell is integrated with .NET, you can do some inline C# with PowerShell to make it even more powerful, which you never have a chance to do with the cmd prompt. Also, it can start jobs to do parallel processing of different {script block}s.


There’s a fairly large difference between these two. At the end of the day, they both execute sequences of actions; PowerShell is an extreme evolution past CMD, much more in line with what a ‘proper’ shell should be.

PS is great for its integration with .NET.


Roughly 30 years of progress in scripting languages.

Really, cmd has not seen any significant progress since the DOS era. It was replaced, first by Windows Script Host and now by PowerShell.

I seem to vaguely recall QBasic and earlier dialects.. not sure where those fit in.  At least you could rename files across directories.


PowerShell has a pipeline based on typed .NET objects, while cmd (and most other command line processors, like Unix based shells, usually) have a pipeline based on strings of characters.

Command Prompt vs. PowerShell: What Are the Differences?

Being a Windows user, you don’t have to deal with the command line interface for daily activities. That being said, for any advanced tasks the command line provides greater flexibility and control. In fact, that’s the sole reason why Windows has both the Command Prompt and PowerShell. Since both are command line interfaces, PowerShell and the Command Prompt may look similar at first glance. But there are significant differences between them. Let us get to know what PowerShell and the Command Prompt actually are and how PowerShell differs from the Command Prompt.

What is Command Prompt

Command Prompt is the default command line interface provided by Microsoft starting from Windows NT (Windows NT 3.x and above). It is a simple win32 application that can interact and talk with any win32 objects in the Windows operating system. It has a user-friendly command structure and is widely used to execute batch files, troubleshoot Windows problems, perform advanced actions, get information, etc. Due to its user interface and command line structure, many call it “the DOS prompt,” though it doesn’t have anything to do with MS-DOS.


What Is PowerShell

The first version of PowerShell, which is based on the .NET framework, was released way back in 2006 and is much more advanced than the Command Prompt. PowerShell has many different advanced features like command piping, task automation, remote execution, etc.


On the other hand, PowerShell deeply integrates with the Windows operating system while still providing an interactive command line interface and scripting language. Considering the deep integration and support for the scripting language, it is often used by system administrators and IT professionals to perform task automation and configuration management.

How PowerShell Differs from Command Prompt

PowerShell is much more advanced in terms of features, capabilities and inner workings when compared to the legacy Command Prompt. In fact, almost every under-the-hood module of Windows can be exposed by PowerShell, thus making it a useful tool for IT professionals, system administrators, and power users.

When I say PowerShell, most of you may think of the standard command-line interface, but Windows also comes with PowerShell ISE (Integrated Scripting Environment), which helps you create custom and complex PowerShell scripts for all kinds of work.


Moreover, PowerShell uses what are known as cmdlets. These cmdlets can be invoked either in the runtime environment or in the automation scripts. Unlike the Command Prompt or even the *nix shell, the output generated from a cmdlet is not just a stream of text (strings) but a collection of objects.

Since PowerShell treats them as objects, the output can be passed as an input to other cmdlets through the pipeline. This allows you to manipulate the data as much as you want without seeking the help of complex regular expressions. This is just not possible in the Command Prompt.

When you compare all this to the legacy Command Prompt, you will find it painfully inferior to PowerShell in terms of both functionality and how much you can do with it.

But all this power of PowerShell comes at a cost; that is the learning curve. If you don’t mind the steep learning curve, then do give PowerShell a try. Of course, if you are entering into the realm of system administration, then you definitely need to learn PowerShell to make your life easier.

If you are an average Windows user who hardly uses the Command Prompt, then you might not get much out of the PowerShell.

What problem did MS solve by creating PowerShell? [closed]



I’m asking because PowerShell confuses me.

I’ve been trying to write some deployment scripts using PowerShell and I’ve been less than enthused by the result. I have a co-worker who loves PowerShell and defends it at every turn. Said co-worker claims PowerShell was never written to be a strong shell, but instead was written to:

a) Allow you to peek and poke at .NET assemblies on the command-line (why is this a reason for PowerShell to exist?)

b) To be hosted in .NET applications for automation, similar to DCOP in KDE and how Gnome is using CORBA.

c) to be treated as “.NET script” rather than as an actual shell (related to b).

I’ve always felt like Windows was missing a decent way to bang out automation scripts. cmd is too simplistic in many cases, and WSH is too obtuse (although the combination can be used successfully, I’m not a fan). When I first heard about PowerShell I felt like Windows was finally getting a decent shell that would be able to help with automation of many tasks, but recent experiences, and my co-worker, tell me otherwise.

To be clear, I don’t take issue with the fact that it’s built on .NET, or that it passes objects around rather than text (despite my Unix background :]), and I’m not arguing that PowerShell is useless, but from what I can see, it doesn’t solve the problem I was hoping it would solve very well. As soon as you step outside of the .NET/Powershell world, things quit being nice and cozy for you.

So with all that out of the way, what problem did MS solve by creating PowerShell, or is it some political bastard child as I suspect? I’ve googled and haven’t hit upon anything that sufficiently answered that for me, but the more citations the better.


PowerShell was actually built as several things: A mature and extensible automation platform and a modern administration shell.

The former is primarily used for the administration GUIs for Exchange and other recent server products. The GUI is just a wrapper around PowerShell, which does the heavy lifting behind the scenes (kind of like how UNIX GUI programs are often wrappers for command-line programs).

Jeffrey Snover (PowerShell inventor) elaborates a little on how PowerShell was created and which goals and problems it should solve.

In my opinion, PowerShell as a shell is clearly intended as a replacement for cmd (easy to see) and Windows Script Host (Windows Script Host didn’t get much attention in recent years, even though it had similar concepts as .NET back in its day [one platform, multiple languages with ActiveScripting], but with .NET Microsoft basically put that to rest and resurrection probably wasn’t an option for them).

It unifies most aspects of Windows administration in common concepts and methods you only have to learn once. Also, the power in PowerShell for me stems greatly from it passing around objects which might explain why you get into problems when you step out of the .NET/PowerShell world where you only get a String[] from a command. But for many things you would call an external program for in cmd, there is a cmdlet which will do it for you.


As a developer, I can tell you I no longer have a bunch of ConsoleApplication42 projects laying around in a folder.

As a developer at a small company where I pretty much do all things IT (DBA, manipulate routers, pull call detail records from the switch, monitor and graph bandwidth for customers, etc…) I can tell you that PowerShell fills a sorely needed gap in Windows and the fact that it’s built on .NET provides a seamless upgrade path when the PowerShell pipeline is too slow to handle millions of iterations or a more permanent, strongly typed implementation is needed.

Anyway, I guess the question is why are you switching to PowerShell if you don’t have a pressing need? I mean it’s good to learn it now since it’s basically the new management interface for all things Microsoft. But if that doesn’t affect you then don’t bother if you don’t think you’re gaining anything.

EDIT (In response to comments below)

It sounds like you’re trying to use the .NET Process class to launch an exe and redirect its stdout so it can be read by the caller. I agree that is a pain in .NET, but fortunately PowerShell does all this for you pretty simply. As for capturing the result and still writing it to the display, that’s pretty simple too, though the command isn’t a very well-known one because it isn’t used that often. Here’s an example:

# I always find it easier to use aliases for external commands
Set-Alias csc C:\Windows\Microsoft.NET\Framework64\v3.5\csc.exe

# Create some source file
Set-Content test.cs @"
class Program {
    static void Main() {
        System.Console.WriteLine("Hello World");
    }
}
"@

# Call CSC.EXE
# the output of csc.exe is written to results.txt and piped
# to the host (or select-string if you prefer)
csc test.cs | Tee-Object -file results.txt

# Check for errors
# this is where community extensions would come in
# handy. powershell 2.0 also has a command to send
# mail but in 1.0 you can grab one from

Disadvantages of Windows PowerShell

Windows PowerShell is a task-based command-line shell and scripting language designed especially for system administration that is used by information technology professionals on a regular basis. Although it is developed by Microsoft and is widely available, it does have some potential drawbacks that IT professionals may not appreciate. If you are learning how to use the language, issues could arise at some point.


Object-Based

One of the potential disadvantages of using Windows PowerShell is that it is object-based. With most shells, you rely on text-based commands to get the job done when writing scripts. If you switch to Windows PowerShell from some other type of shell, you’ll have to get used to a different way of thinking about things. This can be a problem that takes some time to get past for some users.

Security Risks

Another potential drawback of using Windows PowerShell is that it can create some potential security risks. Many IT professionals use it as a way to connect remotely to other computers and servers. When engaging in this process, PowerShell can leave some holes open for security breaches. This creates a potential for viruses, malware or other harmful programs to be installed in the server. If someone else who knows Windows PowerShell gets involved, it could cause problems.

Web Server

Another issue with Windows PowerShell is that it requires you to run a Web server on your server when utilizing remote functionality. This takes up additional space on a server. In many cases, companies will not want to take up more room and designate more resources to this on their own servers. If you are an IT professional working for a company, you may have to get approval from a higher-up before this is allowed.


Although Windows PowerShell does have some potential drawbacks, it also has a few advantages. For example, since it is developed by Microsoft, it is being integrated more and more into Microsoft products and services. Windows PowerShell is also versatile and easy to administrate once you learn the basics of the scripting language. It also gives you the ability to run specific commands that are designed to run only on local networks if you are using the remote connection function.

How PowerShell Differs From the Windows Command Prompt

Windows 7 added PowerShell, a more powerful command-line shell and scripting language than the Command Prompt. Since Windows 7, PowerShell has become more prominent, with it even becoming the default choice in Windows 10.

PowerShell is more complicated than the traditional Command Prompt, but it’s also much more powerful. The Command Prompt is dramatically inferior to shells available for Linux and other Unix-like systems, but PowerShell competes favorably. In addition, most Command Prompt commands are usable in PowerShell, whether natively or through aliases.

How PowerShell Differs From the Command Prompt

RELATED: 5 Cmdlets to Get You Started with PowerShell

PowerShell is actually very different from the Command Prompt. It uses different commands, known as cmdlets in PowerShell. Many system administration tasks — from managing the registry to WMI (Windows Management Instrumentation) — are exposed via PowerShell cmdlets, while they aren’t accessible from the Command Prompt.

RELATED: Geek School: Learning How to Use Objects in PowerShell

PowerShell makes use of pipes—just as Linux does—that allow you to pass the output of one cmdlet to the input of another cmdlet. Thus, you can use multiple cmdlets in sequence to manipulate the same data. Unlike Unix-like systems—which can only pipe streams of characters (text)—PowerShell pipes objects between cmdlets. And pretty much everything in PowerShell is an object, including every response you get from a cmdlet. This allows PowerShell to share more complex data between cmdlets, operating more like a programming language.
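To make the object pipeline concrete, here is a minimal sketch; the cmdlets are standard, but the processes and values you see on any given machine will differ:

```powershell
# Each cmdlet emits .NET objects, not text, so later stages can sort and
# project on real properties instead of parsing columns of output.
Get-Process |
    Sort-Object -Property WorkingSet64 -Descending |
    Select-Object -First 5 -Property Name, Id, WorkingSet64
```

Because WorkingSet64 travels through the pipe as a number on the object, Sort-Object compares it numerically; a text-only pipeline would have to re-parse that column from a formatted table.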

PowerShell isn’t just a shell. It’s a powerful scripting environment you can use to create complex scripts for managing Windows systems much more easily than you could with the Command Prompt.

The Command Prompt is essentially just a legacy environment carried forward in Windows—an environment that copies all of the various DOS commands you would find on a DOS system. It is painfully limited, can’t access many Windows system administration features, is more difficult to compose complex scripts with, and so on. PowerShell is a new environment for Windows system administrators that allows them to use a more modern command-line environment to manage Windows.

When You Would Want to Use PowerShell

So, when would an average Windows user want to use PowerShell?

RELATED: How To Troubleshoot Internet Connection Problems

If you only rarely fire up the Command Prompt to run the occasional ping or ipconfig command, you really don’t need to touch PowerShell. If you’re more comfortable sticking with Command Prompt, it’s not going anywhere. That said, most of those commands work just fine in PowerShell, too, if you want to try it out.
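For instance, the familiar commands run unchanged in a PowerShell window, and each also has a native cmdlet counterpart (a sketch; localhost is just a placeholder target):

```powershell
ping localhost                        # the classic tool still works
ipconfig                              # so does this one
Test-Connection localhost -Count 2    # native cmdlet roughly equivalent to ping
Get-NetIPConfiguration                # native cmdlet roughly equivalent to ipconfig
```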

RELATED: How to Batch Rename Multiple Files in Windows

However, PowerShell can be a much more powerful command-line environment than the Command Prompt. For example, we’ve shown you how to use the PowerShell environment built into Windows to perform a search-and-replace operation to batch rename multiple files in a folder—something that would normally require installing a third-party program. This is the sort of thing that Linux users have always been able to do with their command-line environment, while Windows users were left out.
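A sketch of that kind of search-and-replace rename, assuming a folder of .jpg files whose names contain the word "holiday" (the pattern and replacement here are made up for illustration):

```powershell
# Rename every matching file in the current folder,
# replacing 'holiday' with 'vacation' in each file name.
Get-ChildItem -Filter *.jpg |
    Rename-Item -NewName { $_.Name -replace 'holiday', 'vacation' }
```

The script block passed to -NewName runs once per piped file, which is what makes a whole-folder rename a one-liner.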

However, PowerShell isn’t like the Linux terminal. It’s a bit more complicated, and the average Windows user might not see many benefits from playing with it.

System administrators will want to learn PowerShell so they can manage their systems more efficiently. And if you ever need to write a script to automate various system administration tasks, you should do it with PowerShell.

PowerShell Equivalents of Common Commands

Many common Command Prompt commands—from ipconfig to cd —work in the PowerShell environment. This is because PowerShell contains “aliases” that point these old commands at the appropriate new cmdlets, running the new cmdlets when you type the old commands.

We’ll go over a few common Command Prompt commands and their equivalents in PowerShell anyway—just to give you an idea of how PowerShell’s syntax is different.

Change a Directory

  • DOS:  cd
  • PowerShell:  Set-Location

List Files in a Directory

  • DOS:  dir
  • PowerShell:  Get-ChildItem

Rename a File

  • DOS:  rename
  • PowerShell:  Rename-Item

To see if a DOS command has an alias, you can use the Get-Alias cmdlet. For example, typing Get-Alias cd shows you that cd is actually running the Set-Location cmdlet.
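A few checks you might run, with the mappings PowerShell defines (note the DOS rename command maps through the shorter alias ren):

```powershell
Get-Alias cd     # cd  -> Set-Location
Get-Alias dir    # dir -> Get-ChildItem
Get-Alias ren    # ren -> Rename-Item
Get-Alias -Definition Set-Location   # reverse lookup: every alias for a cmdlet
```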

REVISE: McAfee VirusScan Enterprise on Domain Controllers

Recommended exclusions for VirusScan Enterprise on a Windows Domain Controller with Active Directory or File Replication Service.

The following is a list of files and folders that do not need to be scanned. These files are not at risk of infection and, if included, might cause serious performance issues due to file locking. Where a specific set of files is identified by name, exclude only those files instead of the whole folder; sometimes the whole folder must be excluded. Do not exclude any of these based on the filename extension alone.

Active Directory and Active Directory-Related Files

Main NTDS Database Files

The location of these files is specified in the following registry key:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\DSA Database File]

The default location is %windir%\ntds.

Exclude the following files:

Ntds.dit

Active Directory Transaction Log Files

The location of these files is specified in the following registry key:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\Database Log Files Path]

The default location is %windir%\ntds.

Exclude the following files:

EDB*.log (the wildcard character indicates that there may be several files)




NTDS Working Folder

The location of these files is specified in the following registry key:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\DSA Working Directory]

Exclude the following files:

Temp.edb

Edb.chk

File Replication Service (FRS)

The location of these files is specified in the following registry key:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NtFrs\Parameters\Working Directory]

Exclude the following files:

FRS Working Dir\jet\sys\edb.chk

FRS Working Dir\jet\ntfrs.jdb

FRS Working Dir\jet\log\*.log

FRS Database Log files

The location of these files is specified in the following registry key:

[HKEY_LOCAL_MACHINE\system\currentcontrolset\services\NtFrs\Parameters\DB Log File Directory]

The default location is %windir%\ntfrs. Exclude the following files:

FRS Working Dir\jet\log\*.log (if registry key is not set)

DB Log File Directory\log\*.log (if registry key is set)

Staging folder

The location of the Staging folder and all of its sub-folders is specified in the following registry key:

[HKEY_LOCAL_MACHINE\system\currentcontrolset\services\NtFrs\Parameters\Replica Sets\GUID\Replica Set Stage]

The current location of the Staging folder and all of its sub-folders is the file system reparse target of the replica set staging folders. The location for staging defaults to %systemroot%\sysvol\staging areas.

The current location of the SYSVOL\SYSVOL folder and all of its sub-folders is the file system reparse target of the replica set root. The location for SYSVOL\SYSVOL defaults to %systemroot%\sysvol\sysvol.

FRS Pre-Install Folder

The location of these files is specified in Replica_root\DO_NOT_REMOVE_NtFrs_PreInstall_Directory

The Preinstall folder is always open when FRS is running. In summary, the targeted and excluded list of folders for a SYSVOL tree that is placed in its default location would look similar to the following:

%systemroot%\sysvol Exclude

%systemroot%\sysvol\domain Scan

%systemroot%\sysvol\domain\DO_NOT_REMOVE_NtFrs_PreInstall_Directory Exclude

%systemroot%\sysvol\domain\Policies Scan

%systemroot%\sysvol\domain\Scripts Scan

%systemroot%\sysvol\staging Exclude

%systemroot%\sysvol\staging areas Exclude

%systemroot%\sysvol\sysvol Exclude

REVISE: Change RDP Default Port


Question: How to change the listening port for Remote Desktop

To change the port that Remote Desktop listens on, follow these steps. 

Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:

322756 How to back up and restore the registry in Windows

  1. Start Registry Editor.
  2. Locate and then click the following registry subkey:HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber
  3. On the Edit menu, click Modify, and then click Decimal.
  4. Type the new port number, and then click OK.
  5. Quit Registry Editor.
  6. Restart the computer.

Note When you try to connect to this computer by using the Remote Desktop connection, you must type the new port. You may also have to set the firewall to allow the new port number before you connect to this computer by using the Remote Desktop connection.
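The same registry edit can be scripted. Here is a hedged PowerShell sketch (run as Administrator; 13389 is an arbitrary example port, and a reboot or a restart of Remote Desktop Services is still required afterwards):

```powershell
# Change the RDP listening port in the registry. 13389 is just an example;
# pick any unused port between 1025 and 65535.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
Set-ItemProperty -Path $key -Name PortNumber -Value 13389 -Type DWord

# Read the value back to verify the change took.
(Get-ItemProperty -Path $key -Name PortNumber).PortNumber
```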

Question: Change Remote Desktop RDP Port

Port 3389 is the home of the remote desktop protocol that powers Remote Desktop Services on all modern versions of Windows.  If your system has Remote Desktop enabled, it is listening for connections on port 3389.  Since this port is both well known and can be used to attack accounts, it is low hanging fruit for script kiddies and bots looking for an easy target.

Theoretically, on a system that does not have an account lockout policy in place (which, by the way, is not a system default), the RDP protocol can be used to get the administrator password with brute force. Brute force is a fancy way of saying trying all possible passwords. If the system never locks out the account, then time is the only barrier to eventually getting your password and logging in.

The first defense is to implement a good account lockout policy, but that does not solve the entire problem. Any administrator of a public-facing Windows web server will notice that their server is continuously attacked by bots looking for an easy target. The bots will often lock out your accounts, which can be very annoying.

To protect your system from the bots and script kiddies, I always recommend changing the default RDP port. This will not fool an intelligent attacker, but it will weed out the noise.

Follow these steps to change the Remote Desktop server port:

  1. Open up Registry Editor by clicking on the Start Button, type in regedit and then hit Enter.
  2. In Registry Editor, navigate to HKEY_LOCAL_MACHINE, SYSTEM, CurrentControlSet, Control, Terminal Server, WinStations and RDP-Tcp. 
  3. Right click on the PortNumber dword and select Modify. 
  4. Change the base to Decimal and enter a new port between 1025 and 65535 that is not already in use.
  5. Click OK and reboot.

Make sure to reboot to activate the change.

Keep in mind that the next time you want to connect to your system with RDP you will need to provide the port number. You can do that from the Remote Desktop client by appending a colon after the host name or IP address, followed by the port number. For example, if I have a computer with host name tweak running RDP on port 1234, I would use tweak:1234 in the Remote Desktop client hostname field.
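Remember that the new port also has to be allowed through Windows Firewall before clients can reach it. A sketch using the built-in cmdlet (Windows 8/Server 2012 and later; the rule name and port 1234 are examples matching the text above):

```powershell
# Allow inbound TCP on the custom RDP port. Run from an elevated prompt.
New-NetFirewallRule -DisplayName 'RDP on custom port 1234' `
    -Direction Inbound -Protocol TCP -LocalPort 1234 -Action Allow
```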

Question: Windows Remote Desktop: Configuring Your Firewall and Router

Windows Remote Desktop: Configuring Your Firewall and Router

By Eric Geier

In Part 1 of this tutorial series, we configured Windows to accept remote desktop connections, so we can log into and use a PC anywhere in the world with Microsoft’s Remote Desktop Connection client application. In Part 2, we configured Windows to accept remote connections via a Web browser, so the client application doesn’t have to be installed on the computer from which you are connecting.

However, neither of these methods will work until your firewall is configured to allow remote connections. This tutorial addresses that. Plus, to connect to your PC via the Internet, your router must be properly configured.

Windows Remote Desktop
» Windows Remote Desktop: Setting Up Traditional Access » Setting Up Web Access

In this tutorial, we’ll tell the firewall on the PC that’s hosting the remote connection that it is okay to allow incoming connections on the appropriate port. We’ll also tell your router where to forward remote desktop connections. Let’s get started.

Letting the Traffic Past Your Firewall

Since you will be trying to connect to your PC from the local network or Internet, your firewall software must be configured to let the traffic through. Enabling the Remote Desktop feature on Windows automatically configures Windows Firewall with the appropriate settings; however, you must manually configure any other third-party firewall software you have installed on your computer. To do this, add TCP port 3389 (which Remote Desktop uses) to your firewall’s authorized list. If needed, refer to the help and documentation of the firewall program for assistance.

It’s possible to change your Windows Firewall settings and accidentally mess up the setting automatically made when you enabled Remote Desktop. Therefore, to be on the safe side we’ll verify Remote Desktop connections can pass through.

If you are also setting up Web access to the Remote Desktop Connection, you must add TCP port 80 (or the port you choose for IIS if you changed from the default) to your Windows Firewall and any other third-party firewall. Windows doesn’t automatically add this port to the authorized list, so you will have to do it yourself.

Follow these steps in Windows Vista to verify the Windows Firewall settings or add the Web access port:

(Figure 1: the Block all incoming connections setting)
  1. Click the Start button and choose Control Panel.
  2. On the Control Panel window, under the Security category, click the Allow a program through Windows Firewall link. If User Account Control is enabled, select an account and enter a password, if required, and click Continue on the prompt.
  3. On the Windows Firewall Settings window that opened, click the General tab.
  4. Make sure the Block all incoming connections check box is NOT checked; as Figure 1 shows.
  5. Click the Exceptions tab and scroll down to make sure the Remote Desktop item is checked; as Figure 2 shows. This verifies Windows Firewall is set to allow the traditional Remote Desktop Connections.
  6. If you are setting up Web access with IIS, as well, click the Add Port button. Then, on the Add a Port dialog box, type in a Name (such as Remote Desktop Web Connection), enter the default port 80 (or the port you manually changed IIS to) into the Port Number field, select TCP for the Protocol, and click OK.
  7. When you’re done, click OK.

(Figure 2: even if all incoming connections are blocked, exceptions can be made)

If you’re using Windows XP, here’s how to verify the Windows Firewall settings and/or add the Web access port:

  1. Click the Start button and choose Control Panel.
  2. On the Control Panel window, click the Security Center category.
  3. On the Windows Security Center window that opened, near the bottom of the window, click the Windows Firewall icon.
  4. Make sure the Don’t allow exceptions check box is NOT checked.
  5. Click the Exceptions tab and scroll down to make sure the Remote Desktop item is checked.
  6. If you are setting up Web access with IIS, as well, click the Add Port button. Then, on the Add a Port dialog box, type in a Name (such as Remote Desktop Web Connection), enter the default port 80 (or the port you manually changed IIS to) into the Port Number field, select TCP for the Protocol, and click OK.
  7. When you’re done, click OK.

If you are using other third-party firewall utilities, make sure you add these ports to them as well. If you find you’re having problems later when connecting, consider disabling all firewall software except Windows Firewall.

Configuring Your Router

If your PC isn’t directly connected to your Internet modem, and instead runs through a wired or wireless router, you must configure the router before you can reach the Remote Desktop connection via the Internet. This configuration lets your router know where to direct Remote Desktop connections that originate from the Internet.

Configuring your router consists of setting it to forward data, which comes in to certain ports, to the computer you have set up with the Remote Desktop Connection. For either Windows XP or Vista, TCP port 3389 (which Remote Desktop uses) must be forwarded to the Remote Desktop PC. If you are setting up Web access, you also must forward TCP port 80 (or the non-default port you set) to the host computer.

If you aren’t sure exactly how to set up these port forwards, these steps should help:

  1. Access your router’s Web-based configuration utility by bringing up your Web browser, typing in the IP address of your router, and pressing Enter. If you don’t know the IP address, see your router’s documentation or reference the Default Gateway value that’s given in the connection status details of Windows.
  2. When prompted, enter the username and password of your router. You should have set these login credentials when you had set up your router; however if not, you can reference the default values in the router’s documentation.
  3. Find the Virtual Server or Port Forwarding tab of the router’s administration screens.
  4. Enter the port details for each port you need to forward (discussed in the previous paragraphs) by entering information into the appropriate text boxes or selecting options from list boxes. Figure 3 offers an example.
    (Figure 3: port details for each port to be forwarded)
    You may have to enter a name, which is just for your reference, like remote desktop or remote desktop Web access. Sometimes you can pick the computer (identified by the Computer Name) you want to forward to from a drop-down list, or you may have to enter the IP address of the computer. You can find your computer’s IP address by referencing the connection status details of Windows. Lastly, you’ll probably have to enter the ports you want to forward, which were given earlier for both Remote Desktop and Web access.
  5. Click a Save or Apply button.

Now you must make sure the port(s) are always forwarded to the correct computer. If you are using dynamic IP addresses on your local network (which is the default method), meaning they’re automatically assigned to your computers by the router’s DHCP server, you’ll need to do some additional configuration. You must assign a static IP address to at least the computer that’s going to be hosting the Remote Desktop connection. Otherwise, the IP address you just set up to forward the ports to may at some point be given to another computer or become unused, since it is being automatically assigned.

You have two ways you can go about giving your computer a permanent IP address. You can reserve an IP address for the computer in the router’s configuration utility, if your router supports it. This is preferred so you don’t have to change your computer’s actual settings and connecting to other networks will be much easier. However, if the feature isn’t available you can always manually assign your computer (network adapter) with a static IP address in Windows, such as Figure 4 shows.

Stay tuned-in for the final installment of this series, where we’ll connect to the remote desktop connection via the client application and via Web access. Plus, we’ll discuss how to overcome having a dynamic (changing) IP address.

About the Author: Eric Geier is the Founder and President of Sky-Nets, a Wi-Fi Hotspot Network. He is also the author of many networking and computing books, including Home Networking All-in-One Desk Reference For Dummies (Wiley 2008) and 100 Things You Need to Know about Microsoft Windows Vista (Que 2007).

Question: Change RDP Port of Windows 2012 R2 server running in amazon AWS [duplicate]


How do I change the RDP port number of Windows 2012 R2 server running in Amazon AWS? I could change the port number by going here:


but then could not RDP into server even after changing the inbound rules of security group.


This is resolved. I had to change the firewall settings to allow an inbound connection on the new port.

Question: Changing the RDP Port on Windows 10

By default, remote desktop connections on Windows use port 3389. If you find the need to change this port, the following should help. Make sure you have “Allow remote connections to this computer” checked under “System Properties > Remote” before you begin.

In my experience, you should avoid changing the mapped port for core Windows services if possible, as this can cause numerous configuration and management issues. Other options include:

– Using port mapping (forwarding) on your router (e.g. externalip:10000 -> serverip:3389), however not all routers offer this functionality.

– Installing a third party remote desktop app, like Chrome Remote Desktop or LogMeIn, however these require specific software and/or subscriptions

– Deploying a server/PC as a RDP “gateway”. You then access all further RDP hosts from this first point of contact.

– Using a RD gateway/RD Web access. This requires a server with the appropriate role installed, but can optionally be configured with two-factor authorisation like Duo.

To check what port your RDP is currently listening on, use the netstat command in an elevated command prompt.

netstat -abo

This will show information about current network connections and listening ports, as well as associated executables and processes. You’ll see port 3389 bound to “svchost.exe” on “TermService”.
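If you prefer PowerShell to netstat, a roughly equivalent check (Windows 8/Server 2012 and later) might look like this sketch:

```powershell
# Show the listener on 3389 and resolve its owning process name.
Get-NetTCPConnection -LocalPort 3389 -State Listen |
    ForEach-Object {
        $p = Get-Process -Id $_.OwningProcess
        '{0}:{1} -> {2} (PID {3})' -f $_.LocalAddress, $_.LocalPort,
                                      $p.ProcessName, $p.Id
    }
```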

To change the bound port you’ll need to open an elevated command prompt and run regedit.


Navigate to the PortNumber setting.

HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > Terminal Server > WinStations > RDP-Tcp

Right click on the “REG_DWORD” named “PortNumber” and hit “Modify”. Change the base to Decimal and enter the new port (between 1025 and 65535). You can use NetStat to check if a particular port is already bound to a process.

Once you’ve changed the value, exit RegEdit and either reboot the computer, or simply restart the Remote Desktop Services service using the “Services” snap-in in “Computer Management”. You can confirm the port has been changed by running netstat again (in my case, to 10000).

Finally, open up Windows Firewall and add a new inbound rule for the new port. You won’t be able to change the existing rule as that’s a core system rule, but copy across the values into a new rule and you’ll be good to go.

Question: How to change Remote Desktop port (RDP) on Windows server on VPSie

This short tutorial will explain how to change the RDP (Remote Desktop) port the server is listening on, for use with the Private Cloud Solution (PCS) with one public IP when a client has more than one Windows guest within their Private Cloud.

In this example we will change default port 3389 with port 6000:

1- Login from console to DB server –

Click on the Windows VPSie you want to change the port for – Access then console access

Click Ctrl-ALt-Delete so you get login prompt:

Username: Administrator / Password: emailed to you upon VPSie creation (if not changed by you)

Once logged in :

2- Press the Windows logo key + R simultaneously to open the “Run” dialog and execute the “cmd” command (valid for 2012 R2), or open CMD on 2008 server editions.

3- Open the registry editor by typing the “regedit” command

4- Search for this registry subkey: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

5- Double-click or right-click on the “PortNumber” registry subkey, select the decimal base and type the port number of your choice (the default port is 3389, in this example, we selected port 6000). Click on “Ok” to save your selection.

6- Exit the registry editor

7- Restart your server

Note: Firewall is already enabled by default and allowing RDP connection on all VPSie(s).

Upon reboot you should be able to access RDP on the new port 6000.

In case new port is not allowed in Windows firewall – this is a quick method to add it :

Control Panel –> System and Security –> Windows Firewall –> Advanced Settings –> Inbound Rules –> New Rule.

Select Port –> enter 6000 –> allow the connection for all profiles (in case you want to RDP on that port locally or publicly) –> give it a name –> you’re done.

Question: Change Remote Desktop Port back to 3389

Hello Everyone,

I need some ideas on a decent way to push out a GPO to change/verify that the RDP port on workstations is the default 3389.

The old network admin changed the ports and had WAN IPs pointing to them. This now makes it tricky to RDP to the machines without knowing the port.

I figured I could run a registry script to change the port.

Any ideas?


Hi Andrew,

First, you need to find out how the previous admin changed the port (script? GPO?), because it might get re-applied.

Second, you can check the registry key for the port currently used for RDP (same key for Windows 7 too): SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

The easiest way to apply the “new” RDP port to every domain PC is through GPO:

Computer Configuration > Preferences > Windows Settings > Registry

Just add a new registry item pointing to the above key:

Action: Update

Hive: SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Value Name: PortNumber

Value type: REG_DWORD

Value data: 3389 (make sure to select the base as Decimal on bottom right)
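To verify what a workstation is currently using before (or after) the GPO applies, one option is a quick remote-registry check. This sketch assumes the Remote Registry service is reachable on the targets, and the computer names WS01/WS02 are placeholders:

```powershell
# Read the PortNumber value from each workstation's registry remotely.
$subKey = 'SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
foreach ($pc in 'WS01', 'WS02') {
    $hive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $pc)
    $port = $hive.OpenSubKey($subKey).GetValue('PortNumber')
    '{0} listens for RDP on port {1}' -f $pc, $port
}
```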

Question: Change Remote Desktop Gateway Port

Q: Can I change the port that Remote Desktop Gateway uses?

A: By default, the Remote Desktop (RD) Gateway component that encapsulates RDP in HTTPS packets listens on port 443 (for TCP) and port 3391 (for UDP). You can use the RD Gateway Manager utility to change this as follows:

  1. Right-click the RD Gateway server name in the navigation pane and select Properties.
  2. Select the Transport Settings tab.
  3. Modify the HTTP and/or UDP port number. If you change them, set both to the same custom port, because there’s no way to specify different custom ports for UDP versus TCP in the client.
  4. Click OK.

Note that two firewall exceptions are enabled by default; however, they use the default ports, so you’ll need to add your own firewall exceptions for TCP and UDP for the custom port you selected.

When you connect from a client, you need to add the custom port to the end of the gateway server name, preceded by a colon (:). Note that this works only with an RDP client that supports RDP 8.0 or later.

If you’re using RemoteApp, you need to manually update the gateway in the RDP file with the correct port because you can’t change it via Server Manager to specify a custom port for the gateway. You can modify the port used for the gateway by connecting to the Remote Desktop Session Host and navigating to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\\DeploymentSettings, then editing the DeploymentRDPSettings value to add the port to the part of the string. Note that you must set this before you publish any applications. A nicer way is to use PowerShell:

Set-RDSessionCollectionConfiguration -CollectionName “Your Collection” -CustomRdpProperty “gatewayhostname:s::” -ConnectionBroker


Question: Change RDP listening port without System Reboot

We can easily change remote desktop listening port to some other port than the default 3389.

By default RDP listens on TCP 3389. Once we change this port to some other port, we do not need to restart the whole system to activate the new listening port.

open registry editor and navigate to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Then click on PortNumber and select radio button Decimal which will show value in Decimal. Now enter a desired port number and close the registry editor.

To activate this new port, press Windows+R on the keyboard, which opens the Run window. Type services.msc in the Run window and press Enter; now we can restart “Remote Desktop Services” from the Services list. This will cause any existing remote desktop connections to close, and one may need to reopen Remote Desktop on the new port by entering IPADDRESS:NEWPORT in the Remote Desktop client (mstsc).
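The service restart can also be done from an elevated PowerShell prompt instead of services.msc; -Force is typically needed because other services depend on TermService:

```powershell
# TermService is the service name behind 'Remote Desktop Services'.
Restart-Service -Name TermService -Force
```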

Question: How to Change Remote Desktop Port in Windows

Remote Desktop is a very useful feature of the Windows operating system that allows a user to connect remotely from any computer to a computer where RDP is enabled. By default, Remote Desktop uses port 3389. Since this is a well-known port, leaving RDP enabled on it poses a security risk, so it is highly recommended to change this port.

You can implement an account lockout policy to lock the account after a set number of failed log-in attempts. Even so, you don’t want anyone reaching the RDP login and attempting brute-force attacks in the first place.

How to Change RDP (Remote Desktop Port)

There are two methods to change the default Remote Desktop Port. Let’s have a look at both methods.

Method 1: Change RDP through Microsoft Fix It Utility

Click on this link to download Microsoft Fix it utility. When the file download dialogue box appears, click “Run”.

Follow the instructions in the Fix it wizard. Type a new port number between 1025 and 65535 in the PortNumber text box. Make sure the port is not already in use, otherwise it can conflict and won’t work.



Method 2: Change RDP through Registry Editor

Hold the Windows key and press R to open the Run dialog. In the Run dialog box, type regedit and click the OK button.


When registry editor opens, navigate to the following registry key:


On the Edit menu, click Modify, and then click Decimal. Type a new port number between 1025 and 65535, and click OK.


Note: When you change Remote Desktop Port, you must type the new port number when you try to connect to this computer. If you have a firewall installed, you may need to configure your firewall to allow the new port before you can connect to this computer using Remote Desktop.

Question: How to change default Remote Desktop listening port

By default, Remote Desktop Protocol (for Windows XP and Server 2003) listens on TCP port 3389

Each PC needs to listen on a different port in order for the Remote Desktop request coming in from the client PC to be forwarded by your router to the proper host PC.

To make a host PC listen on a port other than the default port 3389, you must edit the registry.




Run Regedit and go to this key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Find the PortNumber subkey and notice the value of 0x00000d3d (3389). 

Modify the port number to any unused port (i.e., 3390, or d3e in hex) and save the new value.


Reboot for this change to take effect.

Now you need to configure your router to forward the different port requests to the proper local IP address.

Question: Change Remote Desktop Port

By default, Remote Desktop listens on port 3389 (via TCP). Using a quick registry tweak, you can change that to any other valid port. The following steps describe the process:

  1. Start Registry Editor (by default, this is located at c:\windows\regedit.exe).
  2. Go to the following registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
  3. Open the PortNumber subkey.
  4. Pick the Decimal Base option.
  5. Enter the new port number, and then click OK.

Does Remote Desktop send traffic over any other ports?

Primary remote desktop traffic will go over the one port defined above. If sound is enabled, streaming will be attempted over UDP directly. If this connection can’t be made, Remote Desktop will stream sound over a virtual channel via the main remote desktop port.

No other ports are used.

How to connect to a non-standard remote desktop port

To connect to a different port than the default 3389 RDP port, specify the port using one of the following formats:

  • <computername>:<port>
    example: computer:23389
  • <ip address>:<port>
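A client that accepts these formats only needs to split the string on the last colon and fall back to 3389 when no port is given. A minimal sketch in Python (the function name is made up for illustration):

```python
def split_rdp_target(target, default_port=3389):
    """Split '<host>:<port>' into (host, port); default to 3389."""
    host, sep, port = target.rpartition(":")
    if not sep:  # no colon present, so no explicit port
        return target, default_port
    return host, int(port)

print(split_rdp_target("computer:23389"))  # ('computer', 23389)
print(split_rdp_target("192.168.1.10"))    # ('192.168.1.10', 3389)
```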


By default, Windows machines are remotely accessible via Remote Desktop on TCP port 3389. In certain situations changing the Remote Desktop port is useful, especially when you receive a huge number of failed Remote Desktop login attempts on the default port. In order to change the default port of Remote Desktop, you’ll need to alter the registry of your server. Registry changes may cause serious problems on your server when performed incorrectly, so be careful while performing the steps mentioned in this article.

Furthermore, to maintain access to your server after changing the Remote Desktop port, be sure that you also change the port in Windows Firewall’s Remote Desktop Services rule OR create a new rule with the new RDP port by referring to this article. Otherwise, Windows Firewall won’t allow you to access your server using the new Remote Desktop port.


  1. Login to your Windows server via Remote Desktop.
  2. Click on Start >> Run >> Type REGEDIT and hit Enter. This will open Registry Editor.

  3. Locate and click the following registry subkey.


  4. Double click on the PortNumber registry subkey, select the Decimal base, type the new port number in the Value data field, and click OK to save the changes.

  5. Quit Registry Editor.
  6. Restart the VPS/server.



Change Windows VPS RDP port.

This article is based on Windows Server 2003 R2. Most of the steps should be similar for newer Windows Server versions.

Warning: Changing the default RDP port should be used as an addition to normal server security measures – not as an alternative.

Changing the default RDP port will require firewall and registry changes on the VPS, and changes to the RDP file configuration.

Warning: making these changes will prevent the default RDP file for your VPS (found in mPanel) from connecting to the VPS unless it is modified. If you are not sure you understand the changes being made, please back up your VPS first, or don’t change anything.


Part 1. Configuring The RDP File

1.1 Log into mPanel and browse to the remote access page for the VPS. Download the RDP file for your VPS, saving with a file name which can be easily identified as a modified version. e.g. newrdpport.rdp


1.2 Locate the downloaded RDP file to be used to log into the VPS, then right-click it and select Edit from the menu dialogue.


1.3 In the Computer: field enter the IP address of your VPS followed by a colon and the new RDP port number you will be using. e.g.

1.4 Click the Save button in the ‘Connection settings’ section just below where you entered the VPS IP and new RDP port to use.
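The edit in steps 1.2 to 1.4 can also be scripted: .rdp files are plain text, and the Computer value is stored in the `full address:s:` field. A sketch with a hypothetical helper name:

```python
def set_rdp_target(rdp_text, host, port):
    """Rewrite the 'full address' line of a .rdp file to host:port.

    'full address:s:' is the field RDP files use for the Computer
    value; every other line is passed through unchanged.
    """
    lines = []
    replaced = False
    for line in rdp_text.splitlines():
        if line.lower().startswith("full address:s:"):
            line = f"full address:s:{host}:{port}"
            replaced = True
        lines.append(line)
    if not replaced:  # field missing entirely, so append it
        lines.append(f"full address:s:{host}:{port}")
    return "\n".join(lines)

sample = "screen mode id:i:2\nfull address:s:203.0.113.5:3389"
print(set_rdp_target(sample, "203.0.113.5", 23654))
```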

Part 2. Configuring the VPS Firewall

2.1 Open the Windows Firewall configuration dialogue on the VPS. Click the Start menu and go to Settings -> Control Panel -> Windows Firewall (usually the last Control Panel item).


2.2 Click the ‘Exceptions’ tab in the Windows Firewall window.


2.3 Click the ‘Add Port’ button to bring up a new configuration window.


2.4 Enter a name for the new firewall rule being created, and the same port number entered into your RDP file in Part 1.3 of the guide. Leave the default TCP option selected and click OK.


Name: Alternate RDP Port

Port Number: 23654

Note: If you will only be accessing your VPS from a static IP address (or addresses), you can add the IP address in the ‘Change Scope’ section using the ‘Custom List’ option for additional security.

2.5 Click the OK button to exit the Windows Firewall configuration and save the changes.

Part 3. Editing the Windows registry on the VPS

3.1 Open the registry editor on the VPS by clicking the Start menu, go to Run… and type in ‘regedit’ (without including the quotation marks) and click OK or hit the enter key.


3.2 In the registry editor, navigate to the registry key ‘HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp’.


3.3 Scroll down the right side of the window and double-click ‘PortNumber’.

3.4 Select the ‘Decimal’ option in the Edit DWORD Value’ window.

3.5 Enter the same port number you chose as your new RDP port in Part 1.3 of the guide, then click OK.


Part 4. Testing the changes

4.1 Restart your VPS with the facilities provided in mPanel or from the Shut down menu on the VPS.


4.2 Once mPanel indicates the VPS is back online and responding to pings, double-click the RDP file previously edited in Part 1. of the guide. A security warning dialogue will open. Check the ‘Don’t ask me again for connections to this computer’ box to prevent this popping up each time you want to connect to your VPS, then click Connect.


4.3 The VPS desktop login dialogue should appear if the change was successful. You should notice the port number in use is displayed in the window title. Enter your VPS login credentials and check your VPS’ desktop is as you left it.


4.4 After successfully logging into the VPS using the new RDP port, the default RDP firewall rule can be disabled (deselected) and the configuration saved by clicking OK.


4.5 The new configuration can be tested by attempting to connect to the VPS with the original (unmodified) RDP file. Attempting to connect to the VPS with the original RDP file should fail.

Question: To change the default port for all new connections created on the Terminal Server


Run Regedit and go to this key:


Control\Terminal Server\WinStations\RDP-Tcp

  • Find the “PortNumber” subkey and notice the value of 0x00000D3D (hex for 3389).
  • Modify the port number in Hex and save the new value.

You can now connect to the new port by using the “old” Windows 2000 Terminal Server client.

A better option is to use the RDP client found in Windows XP, or, even better, the newer Windows Server 2003 SP1 RDP 5.2 client.

You’ll need to configure your TS client to connect to the new port. Changing the connection port on the newer RDP clients is quite easy, and you CAN also change the connection port for the older TS client. See the Related Articles list for more info.

Question: How to: Custom RDP Port in Windows 2012


In this How-To, we will walk you through changing the RDP Port in Windows Server 2012.

Remote Desktop Protocol (RDP) is a protocol that allows you to connect to and control another computer over an existing network, making it a remote connection. By default, Windows has assigned port 3389 as the default port to connect. To enable RDP on your Windows Server 2012, you can click here for more information.


– A Server with Windows Server 2012. If you do not have a server already, why not consider Windows Cloud Hosting from Atlantic.Net and be up in 30 seconds or less.

Change The RDP Port in Windows 2012

Connect to your server via Remote Desktop

On your keyboard, hold down the Windows logo + R keys to open the “Run” dialog, then execute the “cmd” command and click OK

This is the Run command window in Windows Server 2012


Type “regedit” and click enter

This is the regedit command in Windows Server 2012


Navigate to the following Registry key


This is the registry path to change the RDP Port in Windows 2012


Find the “PortNumber” registry subkey and either right-click or double-click it. A box should pop up that says “Edit DWORD.” Find the value data (it should say 3389 for standard installations) and change it to the port that you would like. In this example, we chose port 1050.

This is the Port value field in the Windows Server 2012 Registry


Exit the registry editor

IMPORTANT: Before restarting your server, be sure that you have enabled your new RDP port on your Windows firewall. Take a look at our guide, “Adding a custom firewall rule,” if you do not know how to add a custom firewall port.

Restart your server

To access your server over the new port, simply type in your IP followed by :PORT (YOUR.IP.ADD.RESS:PORT).

Congratulations! You have just changed the RDP port in Windows Server 2012. Thank you for following along in this How-To; check back with us for any new updates and to learn more about our reliable cloud hosting services.

Question: How to Change the Default RDP Port on a Netgear Router

Remote Desktop Protocol, the specification used to define the operation of Remote Desktop Service, uses the transmission control protocol port 3389 by default. However, if your business uses Remote Desktop, consider changing the port configuration, as 3389 is a well-known port and is therefore vulnerable to hacking. Changing the RDP port in Windows entails altering the port forwarding settings in Netgear as well, as the router won’t automatically change the default RDP port to the new number.


Press “Start,” type “regedit” into the search field and press “Enter” to launch the Registry Editor. You may have to press “Yes” if a message comes up warning you of the dangers of changing registry entries.


Double-click the following folders: “HKEY_LOCAL_MACHINE,” “System,” “CurrentControlSet,” “Control,” “Terminal Server” and “WinStations.”


Click “RDP-Tcp.” Right-click “PortNumber” in the right pane and choose “Modify” from the context menu.


Click “Decimal” and enter a new port number into the Value Data field. Click “OK,” then close the Registry Editor and restart the workstation.


Open a Web browser, type “” (without quotes) into the address bar, then press “Enter.” If the browser fails to navigate to the address, try “” instead.


Type “admin” into the Username field and “password” into the Password field. Click “OK.”


Select “Port Forwarding/Port Triggering” from the left pane, then select the “Port Forwarding” radio button.


Select the RDP entry and click “Edit Service.” Enter the new port number into the Start Port and End Port fields.


Click “Apply” to change the default RDP port on the Netgear router.

Question: How to change the default RDP port 3389 through registry on windows server?

Please follow the steps to change the default RDP port through Registry:

1. Login to the windows server and start Registry Editor.

Click Start >> Run >> enter “regedit” and click Ok.

2. Locate and click on registry subkey mentioned below:

HKEY_LOCAL_MACHINE >> System >> CurrentControlSet >> Control >> Terminal Server >> WinStations >> RDP-Tcp >> PortNumber

3. Right click PortNumber, select Modify, and then click Decimal.

4. Type the new port number, and then click OK. 

5. Quit the Registry Editor.

Once you have changed the port number of the server, restart the server.

Question: How to change the Terminal Server or RDP listening port?

By default, Terminal Server and the Remote Desktop Protocol use TCP port 3389. It is sometimes useful to change the port so that it does not conflict with other machines on the network.

To change the default port, follow the simple steps below:

1. Navigate to Start > Run, type: regedit

2. Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

3. Locate the “PortNumber” subkey and change the value from 00000D3D (hex for 3389) to some other port between 1025 and 65535.

Note: You will also need to configure your RDP/TS client to connect to the same port by specifying the port number with the IP address, as x.x.x.x:nnnn (where x.x.x.x is your RDP IP, and nnnn is the port number)

Question: How To Change Default Remote Desktop Protocol (RDP) port 3389 in Windows Server 2008?

Sometimes you need to change the Remote Desktop Protocol (RDP) port from the default 3389 to something else for better security.

To change the default port, there are two parts to be followed:

A) First Open port in windows Firewall 

B) Change the port 

**If the firewall is not enabled, just skip Part A.


A) First Open port in windows Firewall 

1) Open Windows Explorer, go to Control Panel\Windows Firewall, and click Allow a program through Windows Firewall.


2) Click Add port…, choose TCP, input Name and Port Number, click OK 


3) Click OK in Windows Firewall Settings window. 

B) Change the port

1) Start Registry Editor.

On desktop select menu: Start > Run, input ‘regedit’ and click OK


2) Locate the registry subkey for RDP port:

In Registry Editor find HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

3) On the Edit menu, click Modify, and then click Decimal.

4) Type the new port number, and then click OK. 


5) Quit Registry Editor.

6) Finally, restart the server and you are done!

=======================================================================

**Note:

If the firewall is enabled, please follow Part A first, i.e.

open the port in the Windows firewall (or in any other enabled firewall besides Windows) before changing the default port,

or your server will be inaccessible if you follow Part B first!

Question: Change RDP Listen Port

The default TCP port number for RDP is 3389.

This port can be changed using the registry editor:

  1. Start the Registry Editor.
  2. On the left side navigate to the following registry subkey:
  3. On the right side locate and double-click the value “PortNumber” in order to open the edit window
  4. Select the “Decimal” option, type the new port number and click OK.
  5. Quit Registry Editor and reboot your computer

If you want to connect to XP/VS Server over the internet you will have to set up a port forwarding on your internet router to forward the external ip address/incoming tcp port 3389 (RDP) of your internet router to the internal ip address/tcp port 3389 (RDP) of your internal XP/VS Server computer.

To connect over the internet you would have to enter the external ip address of your internet router which is reachable from the internet into your RDP client.

Your internet router will receive this request when you try to connect with your RDP client over the internet and will forward this request to the internal ip address of your XP/VS Server computer.

Like that you will be able to reach your internal XP/VS Server with an RDP client over the internet.
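What the router does here is plain TCP relaying: accept a connection on the forwarded port, open a connection to the internal machine, and copy bytes in both directions. A toy forwarder in Python illustrates the idea (for understanding only, not a replacement for the router's port forwarding):

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until the source side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(listen_port, target_host, target_port):
    """Accept connections on listen_port and relay them to the target."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

A real router does the same job in its firmware; the point is that the external listen port and the internal target port are independent, which is why each internal PC can be reached through a different external port.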

Question: Improve PC Security by Changing the RDP Port

PC security comprises effective firewalls, efficient anti-malware software, WPA and WEP codes, and several other software-related tweaks and applications. When Remote Desktop is enabled, additional precautions must be taken to minimize the possibility of malware infection and hacking. If the tech at a software company can remotely operate your computer, then so can anybody else with the knowledge and ability. To protect against bots and script kiddies, the RDP port must be changed.

The remote desktop protocol drives Remote Desktop Services through Port 3389 by default. Any Remote Desktop connections are made through Port 3389. This is the case for every user reading this unless you have already changed the port. Basically, this means that this port is an easy target. By changing the RDP port, security is enhanced because bots and kiddies are designed to target RDP Port 3389. Change the port!

For this to be truly effective, implement a strong account lockout policy. This defends against the use of RDP protocol to obtain the administrator password. If the password is attainable due to the absence of an account lockout policy, then the RDP Port can be found regardless of what it has been changed to.

Changing the default RDP port is achieved through a simple registry hack. Another method is to change the RDP port with a third-party utility. Always set a restore point before making changes to the registry.

The Registry Hack

Run regedit from the Start menu to open the Registry Editor. Navigate to HKEY_LOCAL_MACHINE, SYSTEM, CurrentControlSet, Control, Terminal Server, WinStations and RDP-Tcp. Find the PortNumber DWORD and right-click it.


Select Modify. Alter the base to Decimal and enter the new port number with a value between 1025 and 65535, as long as the port is not in use. Click OK.
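The constraint in that last step is simple to encode; a small helper (the name is illustrative) that enforces the 1025-65535 range and steers away from the well-known default:

```python
def acceptable_rdp_port(port):
    """Stay within 1025-65535 and avoid the default 3389,
    which is the whole point of changing the port."""
    return 1025 <= port <= 65535 and port != 3389

assert acceptable_rdp_port(23654)
assert not acceptable_rdp_port(3389)  # the default we are moving away from
assert not acceptable_rdp_port(80)    # below the recommended range
```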

Question: How To Change The RDP Port (Remote Desktop) In Windows 10, 8.1 And 7

Here’s how to change the Remote Desktop Port (RDP) in Windows 10.  This also applies to Windows 8.1 and Windows 7.  Here are also the instructions if you are looking to add an additional Remote Desktop Port

Step 1

Open the Windows Registry (instructions)

Step 2

Browse to the following Registry Sub Key


Change RDP Port Number in Windows 10

Step 3

On the Edit menu, click on Modify and then click on Decimal

Step 4

Type in the new Port Number and Click on OK

Step 5

Exit out of the Windows Registry Editor

Step 6

Restart the Computer and check.

Question: How To Change Remote Desktop Listening Port

By default, the Windows Remote Desktop service automatically listens on TCP port 3389.

However, it’s perfectly fine to change the default RDP listening port for any reason an administrator can think of. For example, to bypass a firewall that only allows web browsing but restricts Remote Desktop connections and other protocols.

In this case, you might need to change the default TCP 3389 to TCP 80 or 443 for the Remote Desktop service running on a Vista Ultimate PC at home.

How to change the Remote Desktop listening port on Windows Vista?

This RDP trick is applicable to Remote Desktop service running on Windows Server 2003 and Windows XP as well (and likely working on Windows Server 2008 or later too)!

  1. Open up the Windows Registry Editor and browse to this Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
  2. Locate the PortNumber Registry key in the right pane, double-click to open it, click the Decimal option in the Base section, enter 443 in the text box and click OK (change 443 to the port number of your choice).
    Take note that:
    • The new TCP port for the Remote Desktop service must not currently be in use. To confirm that TCP port 443 is free or unused, type
      netstat -an | find "443"

      at the Command Prompt window. If there is no output from the netstat command, the TCP 443 port number is not in use (and thus available as the new RDP listening port).
    • If you’re not comfortable with Windows Registry Editor, you can simply copy and paste the following Console Registry Tool command (Reg.exe) to an elevated Command Prompt window in Windows Vista:
      You might need to download Reg.exe from Microsoft if it’s not currently in your Windows.
      REG ADD "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 443 /f
    • To change back to the default, simply change the PortNumber Registry value (in this case, TCP 443) back to TCP port 3389.
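The netstat check in the list above can also be done programmatically: if a bind on the candidate port succeeds, nothing is currently listening on it. A cross-platform sketch in Python (note that on some systems binding a port below 1024 needs elevated rights, which this check would report as not free):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on (host, port).

    A successful bind means the port is available; EADDRINUSE (or a
    permissions error on privileged ports) means it is not usable.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

For example, `port_is_free(443)` mirrors the `netstat -an | find "443"` test described above.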

How to restart Windows Remote Desktop service after changing its listening port?

There are at least two ways to enable/disable or restart Remote Desktop service – Group Policies or System Properties:

Using Group Policies (i.e. gpedit.msc)

  1. Click the Vista Orb, type gpedit.msc in the Start Search text box (Vista Instant Search) and double-click the “gpedit” entry in the Program list
  2. For Windows XP SP2:
    In Computer Configuration\Administrative Templates\Windows Components\Terminal Services, double-click the Allows users to connect remotely using Terminal Services setting.

    For Windows Vista Ultimate:

    In Computer Configuration\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Connections, double-click the Allows users to connect remotely using Terminal Services setting.
  3. Click Disable to deactivate Remote Desktop and then click Enable to reactivate the service again.

Using System Properties dialog box

If the “Allows users to connect remotely using Terminal Services” Group Policy setting is set to “Not Configured”, the “Enable Remote Desktop on this computer” setting (on the Remote tab of the System Properties dialog box) takes precedence. Otherwise, the “Allows users to connect remotely using Terminal Services” Group Policy setting takes precedence.

For Windows Vista computer:

  1. Click the Vista Orb, type system, locate the “System” shortcut in the Program list and double-click to open it
  2. Click the Remote Setting shortcut (requires administrative privilege if UAC is turned on) in the Task pane (on the left)
  3. In the Remote Desktop section, select the “Don’t allow connections to this computer” option and click the Apply button.
  4. Select either the “Allow connections from computers running any version of Remote Desktop (less secure)” or “Allow connections only from computers running Remote Desktop with Network Level Authentication (more secure)” option and click the Apply button – to reactivate the Remote Desktop service to listen on the new TCP port number.

For Windows XP SP2 computer:

  1. Right-click My Computer icon
  2. Select Properties option from the pop-up context menu
  3. Click on the Remote tab of System Properties dialog box
  4. In the Remote Desktop section, untick the check box labelled “Allow Users To Connect Remotely To This Computer” and click the Apply button
  5. Tick the check box labelled “Allow Users To Connect Remotely To This Computer” and click the OK button

Now, netstat -an | find "443" will show the TCP 443 port listening for RDP connections!

How to connect to a Windows Remote Desktop service that is not listening on the default TCP 3389 port number?

Open the Remote Desktop Connection client and specify the host:port syntax (e.g. Vista-Ultimate:443) as the connection string.

Remote Desktop port forwarding in Windows Vista using Putty SSH client.

Instruct the Windows Vista Remote Desktop Connection client to connect to localhost at TCP port 9999 (via SSH Port Forwarding) instead of the default RDP listening port

With reference to Microsoft Technet on Enable or Disable Remote Desktop and Microsoft Knowledge Base article KB306759 on How to change the listening port for Remote Desktop

Question: Change RDP 3389 port on Windows 2008 Server


To change the port you will need to:

1. Change the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber from 3389 to your port number

2. Allow your port number within the Windows 2008 Firewall (and specify the scope of IP addresses that can access the server via RDP – this is optional but good security practice).

3. Restart the RDP service or reboot the server

See pics for details:

Launch regedit via Start > Run:

Modify PortNumber in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\ from 3389 to your custom number, in our case to 1970:

Go to Manage Console:

Disable current Remote Desktop firewall rule:

Create new Rule:

Allow only one or few more IP addresses to connect via Remote Desktop:

It should look like this:

Restart Remote Desktop Service (plus dependent services):

OR you can just reboot the whole server 

In regards to security, the setup is not just security through obscurity: it prevents automated bots from discovering your server’s open port and performing brute-force password guessing on it. Also, if you set up the scope properly with IP addresses or IP ranges, the port will not even come up on a standard port scan. There is the nmap tool (now with a GUI) that can sort of deduce the port, but it is still of little use unless you are an expert in network penetration.

As far as maintaining the information about the change of port, I recommend you look into NOC software like Nagios. This will tell other admins what you have done, what the port number for RDP is, and who is allowed to access it.

All done!  

Question: Changing the listening port for Remote Desktop

Changing the listening port for Remote Desktop: Remote Desktop is a very important feature of Windows which allows users to connect to a computer in another location and interact with it as if it were present locally. For example, if you are at work and you want to connect to your home PC, you can easily do so if RDP is enabled on your home PC. By default, RDP (Remote Desktop Protocol) uses port 3389, and since it’s a common port, every user knows this port number, which can lead to a security risk. So it’s highly recommended to change the listening port for Remote Desktop Connection; to do so, follow the below-listed steps.

Changing the listening port for Remote Desktop

Make sure to create a restore point just in case something goes wrong.

1.Press Windows Key + R then type regedit and hit Enter to open Registry Editor.

2.Navigate to the following registry key:


3.Now make sure you have highlighted RDP-Tcp in the left pane, then in the right pane look for the subkey “PortNumber”.

4.Once you have found PortNumber, double-click on it to change its value. Make sure to select Decimal under Base to edit its value.

5.You should see the default value (3389); to change it, type a new port number between 1025 and 65535, and click OK.

6.Now, whenever you try to connect to your home PC (for which you changed the port number) using Remote Desktop Connection, make sure to type in the new port number.

Note: You may also need to change the firewall configuration in order to allow the new port number before you can connect to this computer using Remote Desktop Connection.

7.To check the result, run cmd with administrative rights and type: netstat -a

Add a custom inbound rule to allow the port through the Windows Firewall

1.Press Windows Key + X then select Control Panel.


2.Now navigate to System and Security > Windows Firewall.

3.Select Advanced Settings from the left-hand side menu.

4.Now select Inbound Rules on the left.

5.Go to Action then click on New Rule.

6.Select Port and click Next.

7.Next, select TCP (or UDP) and Specific local ports, and then specify the port number which you want to allow connection for.

8.Select Allow the connection in the next window.

9.Select the options which you need from Domain, Private, Public (Private and Public are the network types you select when you connect to a new network and Windows asks you to choose the network type; Domain is obviously your domain).

10.Finally, write a Name and Description in the window that shows next. Click Finish.

Question: How to change default RDP (Remote Desktop Connection) port in Windows Server

Follow below steps to change default RDP port on Windows server:

By default, Remote Desktop Connection (RDP) uses port 3389. We can change this default setting for security reasons.

* Login to the server via RDP(Remote Desktop Connection).

* Open the port you wish to use for RDP connection in firewall.


* Click on “Start”>>Go To “Control Panel”>>Click on “Windows Firewall”
* Click on the “Exceptions” tab.
* Click on “Add Port”.
* Specify the required details like Name, Port Number and choose TCP.

* Open Registry Editor.


Click on “Start”>>Go To “Run”>>Type “regedit” (without quotes) and click OK.

* Locate and then click the following registry subkey:



* On the Edit menu, click Modify, and then click Decimal.

* Type the new port number, and then click OK.

* Exit from Registry Editor.

* Restart the server to put the changes into effect.

* Verify connectivity to the new port using telnet.

Question: How to Change Windows VPS Default RDP Port?

Open Regedit in Windows RDP

How Can I Change my Default Remote Desktop Port?
Step 1: Login to your Windows VPS RDP account, go to the “Start Menu”, type “regedit.exe” (without quotes) in the search box, and then click on the regedit.exe result.

Step 2: In Registry Editor(Regedit), Locate and then click the following registry subkey:



1. On the Edit menu, click Modify, and then click Decimal.
2. Type the new port number, and then click OK.
3. Exit from Registry Editor.

Allow your new custom RDP Port in Windows Firewall

Step 3: As you can see, we changed our default RDP port from 3389 to 3344, but now we need to add our new custom port as a Firewall inbound TCP rule.

1.  Go to Start Menu, type “Firewall” and then click on “Windows Firewall with Advanced Security”
2.  In Windows Firewall, click on “Inbound Rules” (left side) and then click on “New Rule” (you can find this option on the right side)

3.  In the “New Inbound Rule Wizard”, click on the “Port” option and then click on the “Next” button.

4.  In Protocol and Ports, select “TCP”, enter your new RDP custom port number in “Specific Local Ports”, and then click on the “Next” button.

5.  Again click on the “Next” button.

6.  Enter a firewall rule name like “My Custom RDP Port”, click on the “Next” button, close the Firewall application, and then restart your VPS.

How can I access my Windows RDP with my New Custom Port?
After restarting your VPS, wait a few minutes and then open the Remote Desktop application on your local PC. If your new custom RDP port is 3344, enter your VPS IP address and port (IP:Port) in the Computer field and then click on the “Connect” option.

Question: How to Change Windows VPS RDP/RDC Port & Allow New Port?

Changing the default RDP port to another port is recommended for your VPS security. My English is not good, so I will add some images so you can easily understand this.

Only for Windows 2008 R2

Step 1:

How to Change Default RDP Port?

Login to your VPS via RDP (VNC is slow, so I recommend you use RDP), go to the Start Menu, and type “regedit.exe” in the search box (without quotes)


In Registry Editor, Go to:



1. Now right-click on the “PortNumber” entry,

2. Click on the “Modify” option, then select “Decimal” and change the value from “3389” to another port number like 3399,

3. Press “OK” and exit from Registry Editor.



Allow your new RDP Port through the Firewall

1. Go to Start Menu, type “Windows Firewall” (without quotes) and open “Windows Firewall with Advanced Security”

(Screenshot – Open Firewall)

2. In Firewall, click on “Inbound Rules” and then click on “New Rule”


3. Select “Port” in list and then click on “Next”

(Screenshot – Select Port – Step 1)

4.  TCP is selected by default; we just need to enter our new RDP port in “Specific Local Ports”, so enter your new port 3399 (or your other new port) and then click “Next”

(Screenshot – Step 2)

5. We need to select “Allow the connection” and then press the “Next” button.

6. Click on “Next” Button without changing anything.

7. In the name field enter “Remote Desktop Port” and then press the “Next” button.


8. Exit from Firewall and Reboot your VPS/Operating System.

After a few minutes, when your VPS comes back online, you will be able to access your RDP account by using your IP:PORT in the RDP application.

Eg: 233.233.233.x:3399 (Just for example)


(Tip: Do not reboot your VPS without allowing your new port in the Firewall, otherwise you will be unable to access your RDP until you fix this via VNC/Console.)


Question: Change the Remote Desktop Connection port to your Windows Server

Windows servers are remotely accessible with Remote Desktop via TCP port 3389 (the default port). In some situations, such as when you wish to obtain a more secure environment, changing the remote access port can be useful. This article explains how to proceed.

Important note: In order to maintain the access to your server after you change the access port, make sure your firewall is properly configured. To be safe, request a KVMIP or a virtual console if you are making the change for a virtual server.

The instructions below apply to machines under Windows Server 2012, 2008 R2, 2008, and 2003. Follow these steps:

  1. Connect to your server via Remote Desktop
  2. Press the Windows logo key + R simultaneously to open the “Run” dialog and execute the “cmd” command
  3. Open the registry editor by typing the “regedit” command
  4. Search for this registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
  5. Double-click (or right-click and select Modify) the “PortNumber” registry value, select the decimal base, and type the port number of your choice (the default port is 3389; in this example, we selected port 3390). Click “OK” to save your selection.
  6. IMPORTANT: Make sure that remote access to your server through the new port is authorized in your Windows firewall before executing the next step.
  7. Exit the registry editor
  8. Restart your server

After the reboot, connect by specifying the new Remote Desktop port number after the server address (for example, server:3390).
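Steps 4 to 6 above can be sketched as equivalent commands from an elevated prompt; port 3390 and the rule name are just the example values from this section:

```shell
:: Step 5: set the RDP listening port to 3390.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f
:: Step 6: allow the new port through Windows Firewall BEFORE rebooting.
netsh advfirewall firewall add rule name="RDP 3390" dir=in action=allow protocol=TCP localport=3390
```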

Question: How to modify RDP listening port on Windows Server 2012

RDP, known as the Remote Desktop Protocol, is a proprietary Microsoft protocol that is responsible for enabling remote desktop connections to a server. This protocol is highly customizable and its configuration can be edited to increase both its security and flexibility through options such as limiting the number of possible concurrent connections or changing the listening port. This second option, modifying the listening port (where new connections communicate with the server) for RDP, is useful as it can enhance your security setup in a very quick and easy way.

In order to modify the listening port for RDP on your Windows 2012 server, we have put together a short guide that will explain the configuration.

Getting Started

To complete this tutorial, you will need:

• 1 Windows Server 2012 RDP server (Cloud Server or Dedicated Server)


Before you proceed, you should note that if you want to change your RDP listening port, you must allow connections to the new port in your Windows Firewall. If you do not do this, you will be locked out of your server. Once you have allowed the new port number in your Windows Firewall and other firewalls on your system, you can continue with the guide.

Begin by opening an administrative session on the Windows Server 2012 RDP server. Click the Start button. In the search field that pops up, enter the following text and execute it:

regedit

The command above will open up the Registry Editor utility window. This Microsoft tool is the central point for making tweaks to how your Windows Server 2012 system runs, and is the location of the port number value for RDP.

With the window open, browse to the following view:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Once you have the RDP-Tcp view open, find the PortNumber field. This field contains the value of your current RDP listening port. For our tutorial, we will change it to 9998 as our new listening port. You can select a different port number, but ensure beforehand that your chosen port number is not already in use on the system and, as a security bonus, is not a commonly-used port number. Edit the PortNumber value to your desired port number:


Next, you will need to restart the Remote Desktop connection service in order for the port number change to be implemented. This service, known as Remote Desktop Services, can be restarted from within Control Panel. Open Control Panel now.

With Control Panel open, navigate to the following views:

System and Security > Administrative Tools > Services

In the services list, you will need to find Remote Desktop Services. Right click on it to bring up the option menu, and click on Restart.

While the Remote Desktop connection service restarts, note that any RDP connections currently open will be disconnected. After RDP is running again, you will be able to connect using the new listening port you set.
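Instead of going through Control Panel, the same restart can be done from an elevated command prompt; as noted above, this drops any open RDP sessions:

```shell
:: TermService is the internal service name of Remote Desktop Services.
:: /y automatically confirms stopping any dependent services.
net stop TermService /y
net start TermService
```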


Congratulations! You have successfully changed the value of the listening port for RDP on your Windows Server 2012 server. If you liked this easy way to enhance your security setup, share it with your friends.

Question: Modify and Change Remote Desktop Listening Port

How to Change the Listening Port for Remote Desktop

Microsoft has a Knowledge Base article, KB306759, that details how to modify and change the Remote Desktop listening port by changing a registry value.

  1. Start Registry Editor by clicking on Start -> Run, and type in regedit in the Run text box, and then press Enter or click OK.
  2. Navigate to the following registry branch/subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
  3. Locate the registry entry PortNumber in the right pane.
  4. Right click on PortNumber and choose Modify (or select PortNumber, then click on Edit menu and select Modify).
  5. On the Edit DWORD Value window, click on Decimal.
  6. Type in the new port number on the Value Data text box.
  7. Click OK when done.
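After step 7, the change can be double-checked from a command prompt; reg query prints the DWORD value in hexadecimal (for example, the default 3389 appears as 0xd3d):

```shell
:: Confirm the current value of PortNumber.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber
```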

Question: Changing the Remote Desktop Port

The default port that the Windows Remote Desktop Connection program listens on is 3389. However, you can change it to any port you wish; maybe you find that public networks block everything except ports 80 and 443 (in which case you could set it to 443), or you feel that changing to a non-default port is better security-wise. Whatever your case, you must edit the Windows Registry:


Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\

Name: PortNumber

Type: REG_DWORD

Value: desired port number in decimal format

Then, if you use the Windows Remote Desktop Connection program to connect to the computer, follow the address you want to connect to with a colon and the port number that you changed to; for example:
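A hypothetical example, assuming the server's address is 192.0.2.10 and the port was changed to 3390:

```shell
:: Launch Remote Desktop Connection directly against the new port.
mstsc /v:192.0.2.10:3390
```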

Question: How to change rdp server port 3389 to 80?

You can change the listening port by tweaking the registry.

To change the port that Remote Desktop listens on, follow these steps: 

> Start Registry Editor (Start -> Run -> “regedit”)

> Locate the following key in the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

> Select the PortNumber value; on the Edit menu, click Modify, and then click Decimal.

>Type the new port number, and then click OK. 

The same information is found in Microsoft KB article 306759.
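For this specific question (moving RDP to port 80), the registry tweak looks like the sketch below; be aware that port 80 is the standard HTTP port, so first make sure no web server such as IIS is already listening on it:

```shell
:: Check whether anything is already listening on port 80.
netstat -ano | findstr :80
:: If the port is free, point RDP at it (run as Administrator).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 80 /f
```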

Question: Change RDP port on a Windows Server

Windows servers are remotely accessible with Remote Desktop via the TCP 3389 port (default port). In some situations, when a more secure environment is needed, changing the remote access port can be useful. This article explains how to change the RDP port on a Windows Hosting Server.

Note: Make sure you have opened remote access to the new RDP port in Windows Firewall before starting to avoid locking yourself out of the server.

WARNING: Be careful when making changes to the Windows Registry as it contains critical co