2014 Latest Cisco 350-001 Dump Free Download(131-140)!

Which two of these elements need to be configured prior to enabling SSH? (Choose two.)

A.    hostname
B.    loopback address
C.    default gateway
D.    domain name
E.    SSH peer address

Answer: AD
To enable Secure Shell (SSH) version 2 (and disable version 1) on a Cisco router, an IOS image that supports 3DES encryption is required. When no SSH version is explicitly configured, both version 1 and version 2 are supported.
Follow the next steps to enable SSH:
1. Configure the hostname command.
2. Configure the DNS domain.
3. Generate RSA key to be used.
4. Enable SSH transport support on the virtual type terminal (vty) lines.
Example SSH version 2 configuration:
hostname ssh-router
aaa new-model
username cisco password cisco
ip domain-name routers.local
! Specify which RSA key pair to use for SSH.
ip ssh rsa keypair-name sshkeys
! Generate the RSA keys and enable the SSH server for local and remote
! authentication. For SSH version 2, the modulus must be at least 768 bits.
crypto key generate rsa usage-keys label sshkeys modulus 768
! Configure SSH control variables.
ip ssh timeout 120
! Configure SSH version 2 (this disables SSH version 1).
ip ssh version 2
! Disable Telnet and enable SSH on the vty lines.
line vty 0 4
 transport input ssh
Commands to verify SSH configuration:
show ssh
show ip ssh
debug ip ssh
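As a quick sanity check, the protocol version a device will speak is advertised in the identification banner it sends when a client connects (RFC 4253). A minimal sketch in Python; the banner strings below are illustrative examples, not captured from a real device:

```python
# Hypothetical helper: parse an SSH identification banner (RFC 4253)
# to confirm which protocol version a server advertises.
def parse_ssh_banner(banner: str):
    """Return (protocol_version, software) from an SSH banner string."""
    if not banner.startswith("SSH-"):
        raise ValueError("not an SSH banner: %r" % banner)
    _, proto, software = banner.strip().split("-", 2)
    return proto, software

# "1.99" means the server still accepts both SSHv1 and SSHv2 clients;
# after "ip ssh version 2" a router advertises "2.0" only.
print(parse_ssh_banner("SSH-2.0-Cisco-1.25"))   # ('2.0', 'Cisco-1.25')
print(parse_ssh_banner("SSH-1.99-Cisco-1.25"))  # ('1.99', 'Cisco-1.25')
```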

What is an important consideration that should be taken into account when configuring shaped
round robin?

A.    It enables policing.
B.    Strict priority is not supported.
C.    WRED must be previously enabled.
D.    It enables WRR.

Answer: B
First we need to understand how round robin algorithm works. The round robin uses multiple queues and dispatches one packet from each queue in each round with no prioritization. For example, it dispatches:
Dispatch one packet from Queue 1
Dispatch one packet from Queue 2
Dispatch one packet from Queue 3
Repeat from Queue 1
There are three implementations of Round Robin scheduling on the Catalyst 6500 and they include Weighted Round Robin (WRR), Deficit Weighted Round Robin (DWRR) and Shaped Round Robin (SRR).
The Weighted Round Robin allows prioritization, meaning that it assigns a “weight” to each queue and dispatches packets from each queue proportionally to an assigned weight. For example:
Dispatch 3 packets from Queue 1 (Weight 3)
Dispatch 2 packets from Queue 2 (Weight 2)
Dispatch 1 packet from Queue 3 (Weight 1)
Repeat from Queue 1 (dispatch 3 next packets)
Unlike Priority Queuing, which always empties the first queue before servicing the next, this kind of scheduling prevents starvation of other applications when, for example, a large download is in progress. WRR can also be combined with a strict-priority queue (on some platforms configured by giving that queue a weight of 0); packets in the other queues are then not serviced until the strict-priority queue (typically queue 4) is empty.

The problem with WRR is that the scheduler is allowed to send an entire packet even when doing so pushes the queue past its byte allowance for the round, which can starve other applications. Deficit Weighted Round Robin solves this by keeping track of the number of "extra" bytes dispatched in each round (the "deficit") and subtracting that deficit from the number of bytes the queue may dispatch in the next round.

Shaped Round Robin (SRR) is a scheduling service for specifying the rate at which packets are dequeued. SRR has two modes, shaped and shared. Shaped mode is available only on the egress queues. Shaped egress queues reserve a set portion of port bandwidth and then send evenly spaced packets per the reservation. Shared egress queues are also guaranteed a configured share of bandwidth, but they do not reserve that bandwidth: in shared mode, if a higher-priority queue is empty, the lower-priority queue can take the unused bandwidth instead of waiting for the reserved slot to expire.

Neither shaped SRR nor shared SRR is better than the other. Shared SRR is used to get the maximum efficiency out of a queuing system, because unused time slots can be reused by queues with excess traffic; this is not possible with standard Weighted Round Robin. Shaped SRR is used to shape a queue, that is, to set a hard limit on how much bandwidth a queue can use; with shaped SRR you can shape queues within a port's overall shaped rate.
http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps7078/prod_qas0900aecd805bacc 7.html
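The deficit mechanism described above can be sketched in a few lines of Python; the queue contents and quantum values below are illustrative, not taken from any platform:

```python
from collections import deque

def deficit_round_robin(queues, quantums, rounds):
    """Dispatch packets from 'queues' (deques of packet sizes in bytes).
    Each round a queue's deficit counter grows by its quantum; a packet is
    sent only if it fits within the counter, which is then decremented."""
    deficits = [0] * len(queues)
    sent = []                       # (queue_index, packet_size) in dispatch order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0     # an empty queue does not bank credit
                continue
            deficits[i] += quantums[i]
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# A 600-byte packet must wait until queue 0 has banked enough credit,
# while queue 1's small packet goes out in the first round.
print(deficit_round_robin([deque([600, 600]), deque([300])], [500, 500], 2))
# [(1, 300), (0, 600)]
```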

Which of the following is the encryption algorithm used for priv option when using SNMPv3?

B.    HMAC-MD5
D.    AES
E.    3DES

Answer: C
Feature Summary
Simple Network Management Protocol Version 3 (SNMPv3) is an interoperable standards-based protocol for network management. SNMPv3 provides secure access to devices by a combination of authenticating and encrypting packets over the network. The security features provided in SNMPv3 are:
Message integrity – ensuring that a packet has not been tampered with in transit.
Authentication – determining that the message is from a valid source.
Encryption – scrambling the contents of a packet to prevent it from being seen by an unauthorized source.
SNMPv3 provides for both security models and security levels. A security model is an authentication strategy that is set up for a user and the group in which the user resides. A security level is the permitted level of security within a security model. A combination of a security model and a security level will determine which security mechanism is employed when handling an SNMP packet. Three security models are available:
SNMPv1, SNMPv2c, and SNMPv3. Table 1 identifies what the combinations of security models and levels mean:
Table 1 SNMP Security Models and Levels

http://www.cisco.com/en/US/docs/ios/12_0t/12_0t3/feature/guide/Snmp3.html#wp4363 http://www.cisco.com/en/US/docs/ios/12_0t/12_0t3/feature/guide/Snmp3.html http://www.cisco.com/en/US/docs/ios/12_4t/12_4t2/snmpv3ae.html
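A minimal SNMPv3 configuration at the priv security level might look like the following sketch; the group name, user name, and passwords are hypothetical, and the exact priv keyword (des, des56, 3des, or aes 128) varies by IOS release and feature set:

```
! Group NETADMIN requires authentication and encryption (priv level).
snmp-server group NETADMIN v3 priv
! User "admin" authenticates with SHA and encrypts with DES.
snmp-server user admin NETADMIN v3 auth sha AuthPass123 priv des PrivPass123
```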

Which RMON group stores statistics for conversations between sets of two addresses?

A.    hostTopN
B.    matrix
C.    statistics
D.    history
E.    packet capture
F.    host

Answer: B
RMON tables can be created for buffer capture, filter, hosts, and matrix information. The buffer capture table details a list of packets captured off of a channel or a logical data or events stream. The filter table details a list of packet filter entries that screen packets for specified conditions as they travel between interfaces. The hosts table details a list of host entries. The matrix table details a list of traffic matrix entries indexed by source and destination MAC addresses.
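The matrix group's bookkeeping amounts to a table keyed by (source address, destination address) pairs. A toy sketch in Python with made-up MAC addresses:

```python
from collections import Counter

def build_matrix(packets):
    """RMON matrix-style statistics: count packets per
    (source, destination) conversation pair."""
    table = Counter()
    for src, dst in packets:
        table[(src, dst)] += 1
    return table

pkts = [("aa:aa", "bb:bb"), ("aa:aa", "bb:bb"), ("bb:bb", "aa:aa")]
matrix = build_matrix(pkts)
print(matrix[("aa:aa", "bb:bb")])  # 2
print(matrix[("bb:bb", "aa:aa")])  # 1
```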

Which of the following describes the appropriate port assignment and message exchange in a
standard TFTP transaction?

A.    Server: RRQ/WRQ Sent
Client: RRQ/WRQ Received
B.    Server: RRQ/WRQ Received
Client: RRQ/WRQ Received
C.    Server: RRQ/WRQ Received
Client: RRQ/WRQ Sent
D.    Server: RRQ/WRQ Received
Client: RRQ/WRQ Sent
E.    Server: RRQ/WRQ Sent
Client: RRQ/WRQ Sent
F.    Server: RRQ/WRQ Received
Client: RRQ/WRQ Sent

Answer: D
A TFTP server (daemon) listens on UDP port 69. The client sends its read request (RRQ) or write request (WRQ) from an ephemeral source port to server port 69; the server receives the RRQ/WRQ and replies from a dynamically allocated high port of its own, and the rest of the transfer takes place between the two ephemeral ports. So in a standard transaction the client sends the RRQ/WRQ and the server receives it, which is also why a firewall that opens only port 69 will let requests in but may block the server's replies from the dynamic port.
http://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol http://social.technet.microsoft.com/Forums/en-CA/configmgrosd/thread/9b9bd9e2-6b2e-4073- 96af-2703ad6a3249
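The on-the-wire shape of the client's request is simple enough to build by hand. A sketch of an RRQ packet per RFC 1350 (the filename is hypothetical):

```python
import struct

def build_rrq(filename, mode="octet"):
    """Build a TFTP read request (RRQ, opcode 1) per RFC 1350.
    The client sends this from an ephemeral UDP port to server port 69."""
    return (struct.pack("!H", 1)            # 2-byte opcode: 1 = RRQ
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

pkt = build_rrq("startup-config")
print(pkt)  # b'\x00\x01startup-config\x00octet\x00'
```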

You are responsible for network monitoring and need to monitor traffic over a routed network from a remote source to an IDS or IPS located in the headquarters site. What would you use in order to accomplish this?

A.    VACLs and VSPAN
D.    NetFlow

Answer: C

What is the default maximum reservable bandwidth (percentage) by any single flow on an
interface after enabling RSVP?

A.    75 percent
B.    60 percent
C.    56 percent
D.    50 percent
E.    25 percent

Answer: A
You must plan carefully to successfully configure and use RSVP on your network. At a minimum, RSVP must reflect your assessment of bandwidth needs on router interfaces. Consider the following questions as you plan for RSVP configuration:
How much bandwidth should RSVP allow per end-user application flow? You must understand the "feeds and speeds" of your applications. By default, the amount reservable by a single flow can be the entire reservable bandwidth. You can, however, limit individual reservations to smaller amounts using the single-flow bandwidth parameter. This value may not exceed the interface's reservable amount, and no one flow may reserve more than the amount specified.

How much bandwidth is available for RSVP? By default, 75 percent of the bandwidth available on an interface is reservable. If you are using a tunnel interface, RSVP can make a reservation for the tunnel whose bandwidth is the sum of the bandwidths reserved within the tunnel.

How much bandwidth must be excluded from RSVP so that it can fairly provide the timely service required by low-volume data conversations? End-to-end controls for data traffic assume that all sessions will behave so as to avoid congestion dynamically. Real-time demands do not follow this behavior. Determine the bandwidth to set aside so that bursty data traffic will not be deprived as a side effect of the RSVP QoS configuration.
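The default arithmetic is easy to verify. A sketch in Python; the T1 figure is illustrative:

```python
def rsvp_defaults(interface_bw_kbps):
    """Defaults when RSVP is enabled on an interface: 75 percent of the
    bandwidth is reservable, and a single flow may, by default, reserve
    up to the entire reservable amount."""
    reservable = int(interface_bw_kbps * 0.75)
    single_flow_max = reservable    # default per-flow cap = whole reservable pool
    return reservable, single_flow_max

print(rsvp_defaults(1544))  # T1 interface -> (1158, 1158)
```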

Which two protocols can have their headers compressed through MQC? (Choose two.)

A.    RTP
B.    RTSP
C.    HTTP
D.    TCP
E.    UDP

Answer: AD
RTP or TCP IP header compression is a mechanism that compresses the IP header in a data packet before the packet is transmitted. Header compression reduces network overhead and speeds up transmission of RTP and TCP packets.
Cisco IOS software provides a related feature called Express RTP/TCP Header Compression. Before this feature was available, if compression of TCP or RTP headers was enabled, compression was performed in the process-switching path. This meant that packets traversing interfaces with TCP or RTP header compression enabled were queued and passed up to the process level to be switched, which slowed down transmission of the packet; some users therefore preferred to fast switch uncompressed TCP and RTP packets. Now, if TCP or RTP header compression is enabled, it occurs by default in the fast-switched path or the Cisco Express Forwarding-switched (CEF-switched) path, depending on which switching method is enabled on the interface. In addition, the number of supported TCP and RTP header compression connections was increased.
If neither fast-switching nor CEF-switching is enabled, then if TCP or RTP header compression is enabled, it will occur in the process-switched path as before. The Express RTP and TCP Header Compression feature has the following benefits:
1. It reduces network overhead.
2. It speeds up transmission of TCP and RTP packets. The faster speed provides a greater benefit on slower links than faster links.
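The benefit is easy to quantify for voice traffic: an IPv4/UDP/RTP header is 40 bytes, while cRTP typically reduces it to 2-4 bytes. A sketch of the overhead arithmetic in Python, using the usual 20-byte G.729 payload as the example:

```python
def header_overhead(payload_bytes, header_bytes):
    """Fraction of each packet on the wire that is header."""
    return header_bytes / (payload_bytes + header_bytes)

UNCOMPRESSED = 20 + 8 + 12  # IPv4 + UDP + RTP = 40 bytes
COMPRESSED = 2              # typical cRTP result (2-4 bytes)
G729_PAYLOAD = 20           # bytes of voice per packet

print(round(header_overhead(G729_PAYLOAD, UNCOMPRESSED), 2))  # 0.67
print(round(header_overhead(G729_PAYLOAD, COMPRESSED), 2))    # 0.09
```

The numbers show why compression matters most on slow links: uncompressed, two thirds of every voice packet is header.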

You have a router running BGP for the MPLS network and OSPF for the local LAN network at the
sales office. A route is being learned from the MPLS network that also exists on the OSPF local
network. It is important that the router chooses the local LAN route being learned from the
downstream switch running OSPF rather than the upstream BGP neighbor. Also, if the local OSPF
route goes away, the BGP route needs to be used. What should be configured to make sure that
the router will choose the LAN network as the preferred path?

A.    static route needs to be added
B.    floating static route needs to be added
C.    bgp backdoor command
D.    ospf backdoor command

Answer: C
The network <address> mask <mask> backdoor command causes BGP to treat the eBGP-learned prefix as if it were locally originated: the administrative distance of the eBGP route is raised from 20 to 200, while the OSPF route keeps its distance of 110, so the route learned from the downstream OSPF switch is preferred and installed in the routing table. A prefix configured as a backdoor is not advertised to BGP peers. If the OSPF route is later withdrawn, the eBGP route (distance 200) is installed in its place, so the MPLS path serves as the backup, exactly as the question requires.
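For the scenario in this question, the backdoor configuration might look like the following sketch; the AS numbers, neighbor address, and prefix are hypothetical:

```
router bgp 65001
 neighbor 192.0.2.1 remote-as 65000
 ! Prefer the IGP-learned path for this LAN prefix; the eBGP route
 ! drops to administrative distance 200 and becomes the backup.
 network 10.1.1.0 mask 255.255.255.0 backdoor
```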

In BGP routing, what does the rule of synchronization mean?

A.    A BGP router can only advertise an EBGP learned route, provided that the route is an IGP route in
the routing table.
B.    A BGP router can only advertise an IBGP learned route, provided that the route is an IGP route in
the routing table.
C.    A BGP router can only advertise an IBGP learned route, provided that the route is an IGP route
that is not in the routing table.
D.    A BGP router can only advertise an EBGP learned route, provided that the route is a metric of 0 in
the BGP table.

Answer: B
When an AS provides transit service to other ASes and there are non-BGP routers in the AS, transit traffic might be dropped if the intermediate non-BGP routers have not learned routes for that traffic via an IGP. The BGP synchronization rule states that if an AS provides transit service to another AS, BGP should not advertise a route until all of the routers within the AS have learned about the route via an IGP. The topology shown in the original figure demonstrates the synchronization rule.
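On Cisco routers, synchronization is toggled under the BGP process; it is disabled by default in modern IOS releases, since a fully meshed iBGP or an MPLS core makes the rule unnecessary. A sketch with a hypothetical AS number:

```
router bgp 65001
 ! Allow iBGP-learned routes to be advertised without an IGP match.
 no synchronization
```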

If you want to pass the Cisco 350-001 Exam successfully, we recommend reading the latest Cisco 350-001 Dump full version.