L2 Computing - Unit 3 - Computer Hardware Systems and Networks


Overview

Computer Science at Gold Level requires the candidate to use appropriate hardware and networking systems to enhance projects that involve programming. As a result of reviewing their work, they will be able to identify and use automated methods or alternative ways of working to improve their programming and use of computers. They will also be aware of issues relating to intellectual property rights. Unfamiliar aspects will require support and advice from other people.

A work activity will typically be ‘straightforward or routine’ because:

The task or context will be familiar and involve few variable aspects. The techniques used will be familiar or commonly undertaken.

Example of context – designing, planning, implementing and testing a basic program for controlling a physical system.

Support for the assessment of this award

Example of typical Computing work at this level (Coming Soon)

Assessor's guide to interpreting the criteria

General Information

QCF general description for Level 2 qualifications

  • Achievement at QCF level 2 (EQF Level 3) reflects the ability to select and use relevant knowledge, ideas, skills and procedures to complete well defined tasks and address straightforward problems. It includes taking responsibility for completing tasks and procedures and exercising autonomy and judgement subject to overall direction or guidance.
  • Use understanding of facts, procedures and ideas to complete well defined tasks and address straightforward problems. Interpret relevant information and ideas. Be aware of the types of information that are relevant to the area of study or work.
  • Standards must be confirmed by a trained Level 2 Assessor or higher
  • Assessors must at a minimum record assessment judgements as entries in the online mark book on the INGOTs.org certification site.
  • Routine evidence of work used for judging assessment outcomes in the candidates' records of their day to day work will be available from their eportfolios and online work. Assessors should ensure that relevant web pages are available to their Account Manager on request by supply of the URL.
  • When the candidate provides evidence of matching all the criteria to the specification subject to the guidance below, the assessor can request the award using the link on the certification site. The Account Manager will request a random sample of evidence from candidates' work that verifies the assessor's judgement.
  • When the Account Manager is satisfied that the evidence is sufficient to safely make an award, the candidate's success will be confirmed and the unit certificate will be printable from the web site.
  • This unit should take an average level 2 learner 40 guided hours of work to complete.

Assessment Method
Assessors can score each of the criteria N, L, S or H. N indicates no evidence and is the default setting. L indicates some capability, but with some help still required to meet the standard. S indicates that the candidate can match the criterion to its required specification in keeping with the overall level descriptor. H indicates performance that goes beyond what is expected in at least some aspects. Candidates are required to achieve at least S on all the criteria to achieve the full unit award.

Once the candidate has satisfied all the criteria by demonstrating practical competence in realistic contexts, they achieve the unit certificate. Candidates who meet the requirements for all units achieve 30 marks and are then eligible to take the grading examination. Grades from the exam are A*, A, B and C. No grade can be awarded at Level 2 without taking the examination, and at least 40% of the marks must come from the examination.

Expansion of the assessment criteria

1. The candidate will understand computer hardware

1.1 I can describe the function of the main hardware components in computing devices

Candidates should be able to describe the function of the CPU, USB ports, audio and video ports (HDMI), RJ45 network ports, SD cards, memory modules, USB memory, hard drive, keyboard, mouse and display for commonly used computing devices.

Evidence: from internal testing and documentation in portfolios.

Additional information and guidance

Candidates should be familiar with commonly used hardware components through hands-on use. Raspberry Pi, building a PC or taking apart disused machines will all provide appropriate experience. There should be descriptions of the components in the portfolio using diagrams - a good exercise for using a vector drawing program such as Inkscape. Work here can be related to criteria on interoperability, standards and costs of specific components to make it real. A focus on building their own PC could be a worthwhile context, as could making a genuinely objective comparison between smartphones. If you don't understand what is in them, you are not in a strong position to say why one is better than another beyond aesthetics and subjective considerations.

1.2 I can explain performance criteria for key components

Candidates should be able to explain CPU processing power, heat generation/energy consumption, storage cost per byte and speed of access, and data transfer rates as key performance issues.

Evidence: from internal testing and portfolios.

Additional information and guidance

Candidates should know that CPU processing speed is complicated! To begin to understand it they need to know the prefixes Mega, Giga and Tera. Mega is 1 million, Giga is 1,000 million and Tera is a million million, or 1,000 Giga. They will probably be familiar with terms like Megabytes and Gigabytes. Now they need to know the significance of these prefixes - they need this for their work in science and maths too, so talk to colleagues in those departments.

In its simplest form, digital processing is done by taking instruction codes and data one at a time and passing them through the CPU. A clock is used to keep this sequence ordered. A program is effectively a set of instructions and associated data that are clocked through the processor in time with the clock. The faster the clock runs, the faster the program runs. The first microcomputers ran with clock speeds of about 1 MHz, that is 1 million ticks per second (1 Hz is one tick per second). If an instruction could be processed on each clock tick, we could then process 1 million instructions per second. So two factors determine how fast the processor can execute a program: how fast the clock can run, and how many clock ticks are needed on average for each instruction. For this reason, simply comparing clock speeds is not necessarily a fair comparison unless the processors are identical in all other respects (a good illustration of the need to control variables in fair tests, linking to the science curriculum).

Why not just make the clock speed higher and higher? The smaller the component size in the processor, the faster the clock can run. Small things can generally vibrate faster than large things: small tuning forks have a higher pitch than large ones because the prongs vibrate faster. As designers have been able to make smaller and smaller components on silicon, they have been able to raise the processor clock speed. Unfortunately, higher frequency generates more heat and it gets harder and harder to remove that heat. The energy also has to come from somewhere; that is easy with mains electricity, but it is a big problem with a battery, which would have to be large or would quickly go flat. This is why completely different and more energy efficient CPU designs are needed in smartphones compared to desktop computers. Clock speeds now are typically around 3 GHz, around 3,000 times as fast as those early microcomputer clocks at 1 MHz.
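A minimal sketch in Python of this reasoning, using assumed illustrative figures (a 3 GHz clock and an average of 4 clock ticks per instruction), shows how the two factors combine:

clock_hz = 3_000_000_000          # assumed 3 GHz clock: 3,000 million ticks per second
cycles_per_instruction = 4        # assumed average clock ticks needed per instruction

instructions_per_second = clock_hz / cycles_per_instruction
print(f"{instructions_per_second:,.0f} instructions per second")

# An early 1 MHz microcomputer managing one instruction per tick:
early_ips = 1_000_000
print(f"Roughly {instructions_per_second / early_ips:,.0f} times faster")

Changing either figure changes the result, which is why clock speed alone is not a fair comparison between different processor designs.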

Clock speed is not the only factor. In early computers the CPUs were 8-bit, meaning they could only cope with a 1-byte instruction at a time, and handling any number bigger than 255 required at least two clock cycles. Now CPUs are 64-bit, so in theory they can process eight times as much per clock cycle as an 8-bit CPU. So apart from the speed of the clock ticks, the number of bits in the CPU registers matters too. In addition to the number of bits in a single CPU, we now have processors with multiple cores. This is like having more than one CPU: 4 cores is currently common, with 8 cores becoming more common. This enables processor manufacturers to increase processing power without increasing the clock speed and therefore hitting heat problems. In addition, there is a limit to how small the components in the processor can be made, so we cannot keep shrinking them in silicon forever.

In general purpose devices, raw processing power is a lot less important than it was. In mobile devices power consumption is probably more important. You can’t have a massive processor with a heat sink and fan in a smartphone or slimline tablet.

In summary, all of the following can affect the speed of execution of a program: the CPU clock speed; the bit width of the registers in the CPU (8-bit, 16-bit, 32-bit, 64-bit, etc.); the number of cores in the CPU; the way the CPU is structured; and the design of the software itself.

There is then the issue of storing programs and data before and after processing. If we could have storage that ran at the same clock speed as the processor it would make things relatively simple: in one clock tick the processed data is stored and the job is done. Unfortunately any storage capable of running at that speed would be extremely expensive, so there is a trade-off between capacity and speed. A hard drive can store terabytes of data but, in simple terms, can only transfer about 1 gigabit per second. The processor can process 64 bits at a 3 GHz clock speed. That is about 200 times faster, so if the hard drive were directly coupled to the processor, the processor would be constantly waiting for the hard drive. (If your desktop computer ever runs out of RAM and starts using the swap file you will see the effect.) To get round this problem there are smaller, faster (and more expensive) storage steps in between. RAM is where programs are loaded for actual execution, and between RAM and the processor are caches, which are smaller pieces of fast memory holding the parts of programs currently being executed. Writing software loops so they fit entirely in cache will make a program run much faster and more efficiently. Data is moved between storage devices by buses. A data bus is effectively a set of parallel wires 8, 16, 32, 64, 128, etc. bits wide. A 64-bit bus can transfer 8 bytes at the same time, so for the same clock speed it is 8 times as fast as an 8-bit bus. It is easy to see from this that the binary number sequence 2, 4, 8, 16, 32, 64, 128, 256, etc. is of fundamental importance in computer technologies.
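The rough arithmetic behind the "about 200 times faster" claim can be checked with a short calculation (the figures are the simplified ones quoted above, not measurements of any particular machine):

cpu_bits_per_second = 64 * 3_000_000_000     # 64-bit CPU at 3 GHz: 192,000,000,000 bits/s
drive_bits_per_second = 1_000_000_000        # hard drive at roughly 1 gigabit per second

ratio = cpu_bits_per_second / drive_bits_per_second
print(f"The CPU can consume data about {ratio:.0f} times faster than the drive can supply it")

This is exactly the gap that RAM and caches exist to bridge.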

The average candidate is not expected to know all of the quantitative details presented here. They should understand the basic principles of trade-offs between cost, speed and power consumption. They should be familiar with the names CPU, register, bus, RAM, cache, hard drive and clock, and the prefixes Kilo, Mega, Giga and Tera used with bits, bytes and Hertz.

1.3 I can relate computer hardware to computational thinking

Candidates should be able to relate hardware speeds and capacities to binary numbers, algorithms and abstractions.

Evidence: from internal tests and content of learner portfolios.

Additional information and guidance

Candidates should be able to recognise that, for example, memory and bus capacities follow patterns related to the numbers 8, 16, 32, 64, 128, 256, 512, 1024 (1k in this context is actually 1024, not 1,000). They should be able to see the link between clock ticks or cycles and processing the instructions in an algorithm: each iteration takes a number of clock cycles, and passing data through the system means synchronising the clocks of the various components. The actual details of the hardware electronics are very, very complicated, but we can abstract simplified versions so we can do things without knowing all the details. The program they write has to go through the CPU as 1s and 0s, but they only need to deal with a less complex, larger-scale system of keywords and numbers that is much closer to what they are used to seeing.
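A few lines of Python make the powers-of-two pattern concrete (purely illustrative; candidates could type these into any Python shell):

for power in range(11):
    print(2 ** power)              # 1, 2, 4, 8, ... 512, 1024

print(2 ** 10)                     # 1k in the binary sense is 1024, not 1000
print(bin(255), bin(256))          # the largest value in one byte, and the next number up

Seeing 255 as 0b11111111 helps link the size of a register or bus directly to the range of numbers it can hold.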

 

2. The candidate will understand the role of network servers

2.1 I can describe a server in terms of its functions

Candidates should be able to appreciate that servers serve clients with access to centralised resources and information.

Evidence: local testing, portfolios

Additional information and guidance

Candidates should know that there is a wide variety of server services that can be provided to network clients, ranging from serving web pages to managing email. The services could be provided by a single machine or spread over many machines; racks of blade servers are now common. Since servers generally work automatically, they don't need individual screens, keyboards, cases, power supplies and so on, which reduces cost and space. Peer-to-peer networks transfer data directly between users and don't need servers, although in practice a network can have both server(s) and peer-to-peer functions. File sharing between peers became very popular with Napster for sharing music but fell foul of copyright law.
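A minimal sketch, using Python's standard http.server module, of a single machine acting as a simple file/web server for clients on the network (the port number 8000 is just a common choice for this kind of test):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serves the files in the current folder to any client that browses to
# http://<this machine's address>:8000/
server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving files on port 8000; press Ctrl+C to stop")
server.serve_forever()

Running this on one machine and fetching a page from another is a quick practical way for candidates to see the client-server relationship in action.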

2.2 I can explain the performance criteria for servers

Candidates should know that most of the criteria that apply to computers in general also apply to servers. Getting data into and out of servers and running many programs concurrently are of particular relevance to servers.

Evidence: From local testing and portfolios

Additional information and guidance

The main differences between the needs of servers and the needs of a general purpose computer are related to multiple users. In general a server has three key functions for clients.

  1. It can host storage for clients' programs and information, e.g. Dropbox on a cloud server or space in Google Drive.
  2. It provides shared information or access to shared information, e.g. YouTube servers or Wikipedia servers.
  3. It provides processing services on demand to its users over the network, e.g. the Google search engine or Google Spreadsheets.

All of this means that servers usually need more storage and more space (RAM) to run programs, but there are specialist tasks that will make some aspects more important than others.

Another trend is to spread the load over several servers, so racks of servers or server blades are a common approach for internet servers. For local networks it is more common to find the server as one powerful computer hosting the accounts of many clients, with other servers such as mail and web proxy as separate boxes, but there is a lot of variation. Cloud computing is growing and it could well be that local servers become increasingly rare. It is a lot less expensive to manage very complex servers centrally rather than having that complexity replicated in every locality. The main technical limit to this is simply getting sufficient bandwidth to run all the services reliably.

The performance criteria for servers depend largely on the use, but speed of data in and out, processing power, RAM and storage are the main considerations. Which of these is the limiting factor depends very much on the particular application. A good exercise for candidates would be to research, for example, the difference between rack servers and blade servers; there is plenty of material on the web for this.

2.3 I can explain backup strategies for servers

Candidates should be able to explain backup strategies and why they are necessary.

Evidence: From assessor observations and portfolios

Additional information and guidance

Digital information can be lost if the hardware devices fail. Information in RAM is lost as soon as power is switched off. Hard drives and solid state storage such as SD cards can store information without power, but they can still be physically destroyed. For this reason information on servers is copied so that if the server fails there is always a backup somewhere else. The snag is that the amount of data is massive and copying it takes time. Copying an entire drive on a desktop computer can take hours; copying large server storage shared by perhaps millions of users could take days or more. The fastest way to back up data is to transfer it over the fastest connection possible, and that means keeping it not too far away. Unfortunately, backups that are in the same physical place as the original are vulnerable to fires or other similar disasters. Any backup strategy has to take into account the risk and the circumstances. Candidates should know:

  • A Full System Backup saves every file and directory stored on the server. Since it takes a while to perform such a sizeable transfer of files, it is generally done only on a weekly or monthly basis. Full backups are the best for data security, as they create a duplicate of the server, which allows it to be completely restored.
  • An Incremental Backup saves only the files that have changed since the previous backup of the server (a simple sketch of this idea follows the list). This takes considerably less time and requires less space than a full backup.
  • A Differential Backup is similar to an incremental backup but saves only the files that have changed since the last full backup. A differential backup takes longer than an incremental backup and uses more disk space, and the same data may be backed up more than once, but it provides a safety net in case a file was somehow missed during a previous backup.
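The sketch below, in Python, illustrates the idea behind an incremental backup: copy only the files changed since the last backup. The folder paths and the 24-hour cut-off are assumptions for illustration; this is not a production backup tool.

import shutil, time
from pathlib import Path

source = Path("/srv/data")                    # assumed folder to protect
backup = Path("/mnt/backup/data")             # assumed backup destination
last_backup_time = time.time() - 24 * 3600    # assume the last backup ran 24 hours ago

for f in source.rglob("*"):
    if f.is_file() and f.stat().st_mtime > last_backup_time:
        target = backup / f.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)               # copy the changed file, keeping its timestamps

A full backup would simply copy everything regardless of modification time, which is why it takes so much longer.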

One issue with backups is that a hardware failure would take the system offline while the hardware was fixed, and if a backup had to be restored it would then take a long time to get back up and running. Solutions to this are RAID 5 and RAID 6, where data is distributed across several discs in such a way that if one disc fails (or two, in the case of RAID 6) the system slows a little but can carry on until the data is rebuilt across a new replacement disc. Of course, if the RAID controller hardware fails, all of the discs will stop until that component is fixed. So the ultimate would be to have one or more identical servers so that if one failed the others could carry on. When you have a million servers, distributing multiple copies of data across them and managing it automatically means you can have entire server failures and the system will not be affected. It also means you can remove servers and add new servers and let the system rebuild itself. This is the type of strategy used by very large service providers such as Google.

3. The candidate will be able to understand network design related to performance

3.1 I can describe network design features

The candidate should be able to describe some particular design features of networks that make them fit for purpose.

Evidence: Portfolios

Additional information and guidance

Candidates should be able to relate general concepts such as bandwidth, interpreted as capacity for data transfer, to a network design. They should appreciate that if data traffic from hundreds of clients all goes from them to point A and then from point A to point B through a single connection, increasing the bandwidth between them and point A might have very little effect. In the best case, where they are the only user, it will make a difference, but in the worst case, where they are sharing the link between points A and B with many others, it won't. One of the goals is to achieve transferable concepts, and speed and efficiency of data transfer is an important issue in all aspects of digital technologies. It is usually a trade-off between speed and cost: fibre optics has huge capacity but is relatively expensive; wireless has relatively low capacity but is inexpensive and very flexible; copper cable is somewhere in between.

Switches and routers are the key components in networks. They channel data to where it needs to go as efficiently as possible. In Ethernet networks (by far the most common local networking system) the aim is to reduce "collisions" between packets of data in the network. A collision is the result of two devices on the same Ethernet network attempting to transmit data at exactly the same time. The network detects the "collision" of the two transmitted packets and discards them both, and the devices have to re-transmit. This can slow the network down, which is especially bad for things like streaming video where continuity is important.

Distance is another key issue for networks. In local area networks, 100 m is a key distance because UTP copper cable is designed to be an inexpensive way of carrying data over that distance. As long as there is a switch between two 100 m runs, everything is fine because switches act as repeaters, repeating the transmission as if it were a new signal. Longer connections can be made with fibre optic cable, which can run for miles without needing signal boosting, but it is a lot more expensive. Wireless is increasingly used. It generally supports lower bandwidths, and the speed of data transfer depends on distance from the wireless access point. Usually the wireless access point is connected to a switch using UTP cable. With multiple wireless access points on a site with many mobile users, it is easy to see how the load could become unbalanced, for example by a lot of people connecting to one access point and none to another. It is possible to get intelligent access points that can optimise connections: although bandwidth falls away with distance, it might be better to connect to an unoccupied access point further away if the nearest one is saturated with connections. Wireless is at the heart of cell phone networks. Typically these connection speeds are much lower than wifi and, if available, a wifi connection is used in preference. Nevertheless it is perfectly possible to stream video to a cell phone over a 3G connection; it just depends on how far you are from a transmitter and how many other people are using it. Cell phone access points can be tens of miles apart, or a few hundred metres apart in cities. It is fairly clear that distance and bandwidth are trade-offs for wireless.

Practical networking components are inexpensive, so candidates should be given some hands-on networking experience. This can be done inexpensively using older computers or Raspberry Pi-type machines.

Candidates can use the internet to find information and ask questions to get a simple working network. They would need, say, one Raspberry Pi to act as the server, an Ethernet switch and some UTP cable, with one or more Pis as clients. Once they have that working they could try adding, for example, a wireless access point.
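A minimal Python sketch of a first test that the new network works: run run_server() on the Pi acting as server and run_client() on a client Pi. The address 192.168.1.10 is an assumption; candidates should substitute the server's real IP address.

import socket

def run_server(port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("0.0.0.0", port))        # listen on all network interfaces
        s.listen(1)
        conn, addr = s.accept()          # wait for one client to connect
        print("Connection from", addr)
        conn.sendall(b"Hello from the server\n")
        conn.close()

def run_client(server_ip="192.168.1.10", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((server_ip, port))
        print(s.recv(1024).decode())     # print whatever the server sent

Getting a message from one machine to another in this way confirms that the cabling, switch and IP addressing are all working.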

3.2 I can explain component choice based on cost and performance

The candidate should be able to relate component choice to fitness for purpose rather than simple all-out performance, since costs can vary a lot.

Evidence: From portfolios.

Additional information and guidance

Candidates are not required to know the cost of specific items but should understand principles that might be best learnt through specific case studies. "Future proofing" is as much a marketing term as it is a useful reason on its own for buying top-end equipment. Take simple cable as an example (illustrative prices per reel):

  • Cat 5e: £70 per 305 m reel
  • Cat 6: £112 per 305 m reel
  • Cat 6a: £283 per 305 m reel
  • Fibre optic: £460 per 250 m reel

A salesman says fibre optic is future proof, so use it to connect a room with 40 clients to a 48-port switch 50 metres away.

So we need 2000m of cable.

Do it with Cat 5e and it is 7 reels of cable at a cost of £490.
Do it with fibre and it is 8 reels of cable at a cost of £3,680.

There is also the issue of connectors. Cat 5e uses RJ45 plugs, which cost pence each. Converting fibre to a UTP switch connection costs about £30 per connection, so add 40 × £30 = £1,200.

So the fibre optic route is around 10 times more expensive. What would the gain be?

At speeds up to 1 Gb, exactly nothing! Remember your broadband internet connection is probably around 20 Mb, so 50 times less than this, and it can stream high quality video. Where you might need more than 1 Gb is to connect the server to the whole network, and even then UTP cable can be used. The main reasons for fibre are distance and going outside. Why outside? Lightning strikes! Copper is a conductor; fibre optic cable isn't. More and more, wireless is replacing desktop connections. Why? It is more flexible and less expensive, even though it has less bandwidth. You can buy a 300 Mbit access point for under £50, then run one 50 m UTP connection to your switch, which of course then needs only 1 port rather than 48. The question is whether one access point will be enough for 40 users. On wireless N it is 300 Mbit, so in principle about 8 Mbit per user. Not huge, but it really depends on use. If all 40 try to stream video at once it could be a problem, but if that video is coming from the internet, the server's connection to the internet will probably be unable to cope anyway. If it is intermittent use by 40 users who are mainly doing some web searches, typing some text and reading email, it will probably be fine. Since £50 is worth a gamble, try it. If it works you have just saved several thousand pounds! You could of course have a few cable connections for those with a particular need for high bandwidth and the rest on wireless.

It would be a good exercise to get candidates to use the web to research such scenarios and decide what actions to take to get best value. Remember it's a salesman's job to get you to spend as much as possible; it's your job to make sure you spend the least needed to get the job done.
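The cost comparison above is easy to reproduce as a small calculation, which candidates could adapt for their own researched prices (the figures below are the example prices quoted earlier):

import math

runs = 40                 # number of client connections
run_length_m = 50         # metres per run
total_m = runs * run_length_m                            # 2,000 m of cable needed

cat5e_cost = math.ceil(total_m / 305) * 70               # 7 reels of Cat 5e = £490
fibre_cost = math.ceil(total_m / 250) * 460 + runs * 30  # 8 reels plus £30 per fibre-to-UTP connection

print(f"Cat 5e: £{cat5e_cost}, fibre: £{fibre_cost}, "
      f"roughly {fibre_cost / cat5e_cost:.0f} times more expensive")

Swapping in different cable prices, run lengths or numbers of clients makes the trade-offs explicit.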

3.3 I can explain how networks communicate to transfer data

The candidate should understand the concept of protocols to enable data transfer.

Evidence: From portfolios

Additional information and guidance

Please refer to the Level 1 guidance. TCP/IP is the protocol suite at the heart of the internet. It stands for Transmission Control Protocol and Internet Protocol. It is another example of abstraction - a simplification of some very complicated technological details so that they can be implemented in the physical world. The TCP/IP protocol is divided into 4 layers:

1. Application layer
2. Transport layer
3. Network layer
4. Data link layer

Each of these is an example of an abstraction from the actual technology.

The application layer has a number of protocols related to it, for example:

1. HTTP (Hypertext transfer protocol)
2. FTP (File transfer protocol)
3. SMTP (Simple mail transfer protocol)
4. SNMP (Simple network management protocol)

HTTP will be most familiar as it is used for web page requests from web servers.

TCP is the key protocol for the transport layer. It divides the data (coming from the application layer) into chunks and then passes these chunks onto the network. It acknowledges received packets, waits for acknowledgements of the packets it sent, and sets a time-out so that packets are resent if acknowledgements are not received in time.

IP is the key protocol for the network layer: it routes data over the network and between connected networks to get it to the intended destination. The data link layer is made up of the device drivers in the OS and the network interface card attached to the system. Both the device drivers and the network interface card take care of the details of communicating with the media being used to transfer the data over the network - cables, wireless and so on.
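The layering can be seen in a few lines of Python: an application-layer request (HTTP) is written into a TCP connection, and TCP and IP handle the delivery. example.com is used here simply as a well-known public test site.

import socket

with socket.create_connection(("example.com", 80)) as s:   # TCP connection to port 80
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):       # TCP hands the reply back in chunks
        response += chunk

print(response.decode(errors="replace")[:300])   # show the start of the HTTP reply

The candidate writes only the application-layer text; everything below that (segmentation, acknowledgements, routing, the physical link) is handled by the lower layers.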

4. The candidate will contribute to good network security and safety

4.1 I can describe features of a good acceptable use policy

The candidate should be able to describe features and say why they are important.

Evidence: From portfolios

Additional information and guidance

An appropriate exercise would be for candidates to research acceptable use policies on the internet and identify common features. Then come to an agreement about which features are essential, which desirable and which not needed and why.

4.2 I can describe the features of a strong password

Candidates should know the characteristics of a strong password and why they make the password strong.

Evidence: From portfolios

Additional information and guidance

See the notes for Level 1. At Level 2, candidates should be able to relate the characteristics of a strong password to work on binary numbers and the number of possible combinations presented to someone trying to hack in.
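A short calculation makes the link between character set, length and the number of possible passwords explicit (the character-set sizes are illustrative assumptions):

short_lowercase = 26 ** 6                  # 6 characters, lowercase letters only
long_mixed = (26 + 26 + 10 + 20) ** 10     # 10 characters from upper, lower, digits and ~20 symbols

print(f"{short_lowercase:,} possible short lowercase passwords")
print(f"{long_mixed:,} possible long mixed passwords")

The second number is astronomically larger, which is why both length and variety of characters matter.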

4.3 I can describe a method of data encryption

The candidate should be able to describe a simple method of data encryption.

Evidence: Internal testing, portfolios.

Additional information and guidance

Please refer to the Level 1 notes. The difference between a Level 2 candidate and a Level 1 candidate is that the Level 2 candidate will be able to describe a method, whereas at Level 1 they are only expected to identify examples, having taken part in some encryption activities.
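As one example of the kind of method a Level 2 candidate might describe, the sketch below implements a simple shift (Caesar) cipher in Python. This is for illustration only; real systems use far stronger encryption.

def shift_letters(text, shift):
    result = ""
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result += chr((ord(ch) - base + shift) % 26 + base)   # shift within the alphabet
        else:
            result += ch                                          # leave spaces and punctuation alone
    return result

secret = shift_letters("Meet at noon", 3)     # encrypts to "Phhw dw qrrq"
print(secret)
print(shift_letters(secret, -3))              # shifting back by 3 decrypts it

Describing how the shift works, and why knowing the shift value (the key) is needed to decrypt, is a good basis for meeting this criterion.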

4.4 I can identify examples of unsafe practice on networks

Given practical working scenarios, the candidate should be able to identify unsafe practices.

Evidence: from assessor dialogue and portfolios.

Additional information and guidance

Typically the candidate should be able to identify that actions such as making their home location or immediate location publicly available are potentially unsafe. They should not take conversations with strangers on a network at face value, and they should not arrange to meet anyone they know only through network communications. They should use strong passwords, keep passwords secure and understand the implications of identity theft. Illegal activities such as hacking networks can also be considered unsafe practice.

Moderation/verification
The assessor should keep a record of assessment judgements made for each candidate and make notes of any significant issues for any candidate. They must be prepared to enter into dialogue with their Account Manager and provide their assessment records to the Account Manager through the online mark book. They should be prepared to provide evidence as a basis for their judgements through reference to candidate e-portfolios and through signed witness statements associated with the criteria-matching marks in the online mark book. Before authorising certification, the Account Manager must be satisfied that the assessor's judgements are sound.