Infrastructure Services: The Backbone of an Organization | Google IT Support Certificate

What are IT Infrastructure Services?

Infrastructure Services That Keep Organizations Running

Infrastructure services enable internet connectivity, manage networks, and run the server-based functions that let a company operate. Corporate networks require defined subnets, IP assignment via static settings or DHCP, configured networking hardware and wireless, and reliable DNS. Small firms may have one person owning all of these services, while larger organizations split them across specialized teams. The focus here is on physical servers and the networking that keeps them connected, with cloud options available when offloading makes sense. The objective is to know which services to deploy and how to integrate them into an organization.

Choosing Cloud Models Instead of Managing Hardware

Instead of buying and patching hardware, Infrastructure as a Service (IaaS) provides preconfigured virtual machines that behave like physical servers, with major options including AWS EC2, Linode, Azure, and Google Compute Engine. Networking as a Service (NaaS) offloads routing, WAN and intranet setup, security, and costly networking hardware. Software as a Service (SaaS) replaces per-machine installs with browser-based suites like Microsoft Office 365 and Google's G Suite, while Platform as a Service (PaaS) bundles code runtimes, databases, and web serving through offerings like Heroku, Azure, and Google App Engine. Directory services can also be hosted in the cloud as Directory as a Service (DaaS) to centralize users and computers. Cloud services are widely used but introduce recurring costs and provider dependence, so understand how each service works before adopting it.
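
As a sketch of how IaaS provisioning looks in practice, the AWS CLI can launch an EC2 virtual machine in a couple of commands; the image ID, instance type, and count below are placeholders, not recommendations.

    # Launch one EC2 virtual machine (IaaS); the AMI ID is a placeholder
    aws ec2 run-instances \
        --image-id ami-0abcdef1234567890 \
        --instance-type t3.micro \
        --count 1

    # Confirm the instance is running
    aws ec2 describe-instances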

Run Services on Dedicated Server Operating Systems

Services should be installed on dedicated server operating systems, not user editions. Server OSes are optimized for high connection counts and large amounts of memory, hardened for security, and bundled with built-in services. Common choices include Windows Server, Ubuntu Server and other Linux server editions, and macOS Server. You can buy or rent server hardware and run it on-site or at another location, managing it end to end.

Virtualization Consolidates Services and Simplifies Maintenance

Services can run on dedicated hardware for maximum performance or as multiple virtual instances consolidated on one physical server to make better use of resources. Virtualization lowers cost, simplifies maintenance because instances can be stopped and moved quickly, and eases recovery from hardware issues, while redundancy mitigates single points of failure on physical machines. Choose based on performance needs, budget, maintenance windows, and failure tolerance. In Qwiklabs, starting a lab creates a fresh Google Cloud Console account, provisions VMs with the specified CPU, memory, operating system, and extra disks, lets you access them via SSH or Remote Desktop, and destroys them afterward to return resources to the pool.
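
As a minimal sketch of provisioning a VM like the ones Qwiklabs spins up, the gcloud CLI can create, connect to, and delete an instance; the instance name, machine type, and image family here are illustrative choices.

    # Create a small Debian VM in Google Compute Engine (name and sizes are examples)
    gcloud compute instances create lab-vm \
        --machine-type=e2-medium \
        --image-family=debian-12 \
        --image-project=debian-cloud

    # Open an SSH session to the new instance
    gcloud compute ssh lab-vm

    # Tear it down when finished to return resources to the pool
    gcloud compute instances delete lab-vm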

Remote Access Enables Anywhere Administration

Remote administration lets you troubleshoot and maintain systems from anywhere. On Linux, install an OpenSSH client on the source machine and an OpenSSH server on the target, then connect with ssh user@ip and authenticate. On Windows, use tools like WinRM or PuTTY for command-line access and RDP for remote GUI sessions. A small amount of setup, installing clients and servers and enabling remote access, pays off in long-term manageability.
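
A minimal sketch of that setup on Debian or Ubuntu systems; the username and IP address are placeholders.

    # On the target machine: install and start the OpenSSH server
    sudo apt install openssh-server
    sudo systemctl enable --now ssh

    # On the source machine: install the client, then connect and authenticate
    sudo apt install openssh-client
    ssh user@192.0.2.10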

Moving Files and Booting Over the Network

File transfer services move data between machines without manual copying. FTP is a legacy, non-encrypted protocol; SFTP sends data through SSH and encrypts it. TFTP is simpler and unauthenticated, suited to generic files rather than sensitive data, and often used to host installation media. Preboot Execution Environment (PXE) network boot lets systems launch installers from a TFTP server, while secure shared access to files across machines is better handled by network file storage services.
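
For instance, an encrypted transfer over SFTP might look like the sketch below; the host and file names are placeholders.

    # Open an encrypted SFTP session (runs over SSH, so the target needs sshd)
    sftp user@192.0.2.10
    #   put local-report.pdf    uploads a file to the server
    #   get remote-notes.txt    downloads a file to this machine

    # Alternatively, a one-shot encrypted copy over SSH with scp
    scp local-report.pdf user@192.0.2.10:/home/user/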

Time Synchronization, Intranets, and Proxies in the Enterprise

Network Time Protocol (NTP) keeps clocks synchronized across systems so that time-dependent services like Kerberos work reliably. You can run a local NTP server and point clients to it, or use public NTP servers; for large fleets, running your own is better etiquette. A common practice is to have a local NTP server sync from the public servers, reducing external load while centralizing control. Intranets act as internal company websites for documentation, news, and collaboration, accessible only on the corporate network. Proxy servers sit between the company and the internet to preserve privacy, enable monitoring and logging, and filter access to sites.
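
A sketch of pointing a Linux client at a local NTP server, assuming the classic ntpd daemon on Debian or Ubuntu and a hypothetical internal hostname.

    # Install the NTP daemon
    sudo apt install ntp

    # In /etc/ntp.conf, replace the default pool entries with the local server:
    #   server ntp.corp.example.com iburst
    sudo systemctl restart ntp

    # Verify the client is syncing from the expected peer
    ntpq -p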

DNS for Websites and Internal Name Resolution

DNS maps human-friendly names to IP addresses and is essential for both public websites and internal systems. For a website, purchase a domain from a registrar like GoDaddy or Bluehost, host the site yourself or through a provider, and either use the registrar's DNS settings or run your own authoritative DNS to point the name at your server. Internally, name lookups check the local hosts file before querying DNS; maintaining hosts files on every machine doesn't scale. A better approach is a local DNS server that holds all host-to-IP mappings, or integrating DNS with a directory service like Active Directory or OpenLDAP so those records populate automatically. Popular DNS server software includes BIND and PowerDNS. If resolution fails, verify connectivity with ping, test lookups with nslookup, and check for hosts file entries that override DNS.
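
A quick troubleshooting pass following those steps might look like this; the internal hostname is a placeholder.

    # Confirm basic connectivity using a raw IP, which bypasses DNS entirely
    ping -c 3 8.8.8.8

    # Ask the configured DNS server to resolve the name
    nslookup intranet.example.com

    # Check for a hosts file entry that overrides DNS
    grep intranet /etc/hosts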

Dynamic Addressing with DHCP and DNS Integration

Dynamic Host Configuration Protocol (DHCP) assigns IP addresses automatically, avoiding manual tracking of static addresses and easing future network expansion. Configure the DHCP server with the address range to hand out, the local DNS server addresses, the default gateway, and the subnet mask. After installing DHCP server software (Windows Server includes one), set clients to obtain addresses via DHCP. When DHCP is configured to share DNS server locations, DNS can update host-to-IP mappings automatically as leases change.
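
A minimal sketch of that configuration in the ISC DHCP server's dhcpd.conf format, carrying the range, gateway, DNS server, and subnet mask described above; every address here is an example.

    # /etc/dhcp/dhcpd.conf -- minimal scope for one subnet (example addresses)
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;        # pool of leasable addresses
      option routers 192.168.1.1;               # default gateway
      option domain-name-servers 192.168.1.10;  # local DNS server
    }

Restarting the DHCP service (for example, sudo systemctl restart isc-dhcp-server on Debian or Ubuntu) applies the change.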