ISSN: 1863-7310 (print), 2197-1781 (electronic)
Undergraduate Topics in Computer Science
Publisher: Springer
DOI: 10.1007/978-1-4471-4069-6_1
2013
Algorithms, Numbers, and Machines
This chapter begins with a strange algorithm, Russian Peasant Multiplication, that may or may not work correctly. (Verifying correctness is a constant theme throughout this book, and the techniques used to decide such issues are explained thoroughly later.)
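For concreteness, here is one common rendition of the algorithm as a short Python sketch; the function name and the restriction to non-negative integers are our own, not the chapter's.

```python
def russian_peasant(a, b):
    """Multiply non-negative a by b using only halving, doubling and addition."""
    total = 0
    while a > 0:
        if a % 2 == 1:     # when the halved factor is odd,
            total += b     # add the current doubled factor
        a //= 2            # halve one factor (discarding the remainder)
        b *= 2             # double the other
    return total

assert russian_peasant(17, 19) == 17 * 19
```

Whether, and why, this always produces the right answer is exactly the kind of correctness question the book takes up later.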
DOI: 10.1007/978-1-84882-914-5_12
2010
The Logic Programming Paradigm
In this chapter we analyse the other paradigm which, together with functional programming, supports declarative programming. The logic programming paradigm includes both theoretical and fully implemented languages, of which the best known is surely PROLOG. Although these languages differ considerably in pragmatic and, for some, theoretical respects, they all share the idea of interpreting computation as logical deduction. In this chapter we therefore examine these concepts while trying to limit the theoretical part, adopting the approach that has characterised the rest of the text. We do not intend to teach programming in PROLOG, even though we present various examples of real programs; rather, we aim to provide a sufficient basis for understanding and, in a short time, mastering this and other logic programming languages.
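As a rough illustration of computation as logical deduction, here is a tiny knowledge base hand-translated into Python (this is not PROLOG, and the parent/ancestor example is ours, not the chapter's): each function mirrors a clause, and answering a query amounts to deducing it from the clauses.

```python
# The PROLOG-style program being mimicked:
#   parent(tom, bob).   parent(bob, ann).
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

parents = {("tom", "bob"), ("bob", "ann")}    # the fact base

def ancestor(x, y):
    """True iff ancestor(x, y) is deducible from the clauses above."""
    if (x, y) in parents:                     # first rule: every parent is an ancestor
        return True
    # second rule: x is an ancestor of y if x is a parent of some z
    # and z is, recursively, an ancestor of y
    return any(ancestor(z, y) for (p, z) in parents if p == x)

print(ancestor("tom", "ann"))   # True:  the query ?- ancestor(tom, ann). succeeds
print(ancestor("ann", "tom"))   # False: no deduction yields ancestor(ann, tom)
```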
DOI: 10.1007/978-1-4471-4156-3_12
2012
A Locally Optimizing Compiler
In this chapter we shall see that thinking in continuations is also beneficial when compiling micro-C to stack machine code. Generating stack machine code backwards may seem silly, but it enables the compiler to inspect the code that will consume the result of the code being generated. This allows the compiler to perform many optimizations (code improvements) easily.
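A minimal sketch of this idea in Python (the instruction names and the addCST helper are illustrative only, not the book's micro-C instruction set): because code is generated backwards, the function that emits a constant already holds the instruction list C that will consume it, and can simplify on the spot.

```python
def addCST(i, C):
    """Emit 'push constant i' in front of the already-generated code C,
    inspecting C to avoid useless or foldable instructions."""
    if i == 0 and C[:1] == ["ADD"]:      # pushing 0 and then adding is a no-op
        return C[1:]
    if i == 1 and C[:1] == ["MUL"]:      # multiplying by 1 is a no-op
        return C[1:]
    if C[:1] == ["NOT"]:                 # the constant feeds NOT: evaluate at compile time
        return addCST(1 if i == 0 else 0, C[1:])
    return [("CSTI", i)] + C

# The continuation (code to run afterwards) is known when the constant is emitted,
# so no separate peephole pass over the finished code is needed.
print(addCST(0, ["ADD", ("CSTI", 7), "MUL"]))   # [('CSTI', 7), 'MUL']
```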
DOI: 10.1007/978-1-4471-2164-0_3
2012
Locality and Synchronization
The discussion so far has assumed a shared memory address space with a uniform access cost for every address. This assumption is not realistic, in particular for multicore machines, where some of the memory references made by a parallel program can take significantly longer than others. In this chapter we consider a simplified model of parallel machines that demonstrates this claim and is used as a "formal" model to study ParC's memory references. Though this model does not simulate a multicore machine, it can be regarded as an intermediate stage between a uniform cost for shared memory references and the complexity of real multicore machines.
Keywords: Shared Memory, Parallel Machine, Shared Variable, Critical Section, Memory Reference
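As a generic illustration only (not the book's formal model of ParC), the sketch below shows how a simplified two-level cost model makes the same sequence of memory references cheap or expensive depending on which addresses happen to be local to the executing core; the costs 1 and 10 are arbitrary.

```python
LOCAL_COST, REMOTE_COST = 1, 10      # assumed relative access latencies

def reference_cost(refs, local_set):
    """Total cost of a sequence of memory references, given which
    addresses are local to the core issuing them."""
    return sum(LOCAL_COST if r in local_set else REMOTE_COST for r in refs)

refs = ["a", "a", "b", "a", "c", "a"]
print(reference_cost(refs, {"a"}))   # 24: most references are local
print(reference_cost(refs, {"c"}))   # 51: the same references, mostly remote
```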
DOI: 10.1007/978-1-4471-2733-8_8
2012
Illumination and Shading
Modelling illumination and shading is essential for obtaining realistic images of 3D scenes. Basic notions of illumination and shading, such as different light sources, reflection of light, shadows, transparent objects and textures on surfaces, are explained. Apart from the standard illumination techniques, the radiosity model and ray tracing are introduced. The use of illumination and shading within Java 3D is demonstrated by numerous Java 3D examples.
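As an illustration of the underlying reflection model only (the book's own examples use Java 3D), here is a Phong-style intensity computation in Python with ambient, diffuse and specular terms; the coefficients and vectors are made up.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, to_light, to_viewer,
          ambient=0.1, diffuse=0.7, specular=0.2, shininess=32):
    """Scalar intensity at a surface point for a single white light source."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diff = max(np.dot(n, l), 0.0)                 # Lambertian diffuse term
    r = 2 * np.dot(n, l) * n - l                  # ideal reflection of the light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    return ambient + diffuse * diff + specular * spec

print(phong(np.array([0.0, 0.0, 1.0]),    # surface normal
            np.array([0.0, 1.0, 1.0]),    # direction towards the light
            np.array([0.0, 0.0, 1.0])))   # direction towards the viewer
```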
DOI: 10.1007/978-1-84882-339-6_8
2009
Object-Oriented Data-Structures
DOI: 10.1007/978-0-85729-446-3_8
2011
Applications of the Derivative
This chapter is devoted to some applications of the derivative which form part of the basic skills in modelling. We start with a discussion of features of graphs. More precisely, we use the derivative to describe geometric properties like maxima, minima and monotonicity. Even though plotting functions with Matlab or Maple is simple, understanding the connection with the derivative is important, for example, when a function with given properties is to be chosen from a particular class of functions. In the following section we discuss Newton's method and the concept of order of convergence. Newton's method is one of the most important tools for computing zeros of functions and is in nearly universal use. The final section of this chapter is devoted to an elementary method from data analysis: we show how to compute a regression line through the origin. There are many areas of application that involve linear regression; this topic will be developed in more detail in Chap. 18.
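Two short Python sketches of the methods named above, Newton's method and the least-squares regression line through the origin; the tolerance, starting value and test data are our own, not the chapter's.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n) for a zero of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

def regression_through_origin(xs, ys):
    """Slope k of y = k*x minimising sum((y_i - k*x_i)^2): k = sum(x*y)/sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))   # ~1.414..., a zero of x^2 - 2
print(regression_through_origin([1, 2, 3], [2.1, 3.9, 6.2]))  # ~2.04
```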
DOI: 10.1007/978-0-85729-841-6_8
2011
Time Dependent Data
The world is not a static place—the conditions sensors describe vary over time. In some cases the variation is captured as a change in some state (e.g. light as day becomes night), in other cases the variation describes the reading (e.g. a motion detector) and in yet others the reading is encoded within a varying signal (e.g. sound). In this chapter we shall explore some of the techniques which can be applied to handling these time-varying signals. We review state and event based systems; frequency domain models, including Fourier transforms; and prediction methods.
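A small sketch of frequency-domain analysis with NumPy's FFT (the sampling rate and test signal are invented, not taken from the chapter): a noisy 5 Hz tone is moved into the frequency domain and its dominant frequency recovered.

```python
import numpy as np

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)              # two seconds of samples
signal = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(t.size)   # 5 Hz tone + noise

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)  # frequency of each bin

print(freqs[np.argmax(spectrum[1:]) + 1])    # dominant non-DC frequency, ~5.0 Hz
```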
DOI: 10.1007/978-1-4471-4884-5_7
2013
Estimating the Predictive Accuracy of a Classifier
This chapter is concerned with estimating the performance of a classifier (of any kind). Three methods are described for estimating a classifier's predictive accuracy. The first of these is to divide the data available into a training set, used for generating the classifier, and a test set, used for evaluating its performance. The other methods are k-fold cross-validation and its extreme form, N-fold (or leave-one-out) cross-validation. A statistical measure of the accuracy of an estimate formed using any of these methods, known as the standard error, is introduced. Experiments to estimate the predictive accuracy of the classifiers generated for various datasets are described, including datasets with missing attribute values. Finally, a tabular way of presenting classifier performance information, called a confusion matrix, is introduced, together with the notion of true and false positive and negative classifications.
Keywords: Predictive Accuracy, Information Gain, Confusion Matrix, Attribute Selection, Negative Classification
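A compact sketch of these estimation methods using scikit-learn, which the book itself does not use; the dataset and classifier are placeholders, and the standard-error formula assumes accuracy behaves like a binomial proportion.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, cross_val_score, LeaveOneOut
from sklearn.metrics import confusion_matrix

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# 1. Hold-out: train on part of the data, evaluate on the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
p = clf.fit(X_tr, y_tr).score(X_te, y_te)
std_err = np.sqrt(p * (1 - p) / len(y_te))       # standard error of the accuracy estimate
print(f"hold-out accuracy {p:.3f} +/- {std_err:.3f}")

# 2. k-fold cross-validation (k = 10) and its extreme form, leave-one-out.
print("10-fold:", cross_val_score(clf, X, y, cv=10).mean())
print("leave-one-out:", cross_val_score(clf, X, y, cv=LeaveOneOut()).mean())

# 3. Confusion matrix: rows are true classes, columns are predicted classes.
print(confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)))
```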
DOI: 10.1007/978-1-84882-339-6_1
2009
Expressions, Variables and Assignments
DOI: 10.1007/978-1-84800-302-6_7
2008
Coding and Unit Testing
DOI: 10.1007/978-1-4471-4357-4_7
2013
Internet Application-Layer Protocols
In the previous chapter, we saw how IP packets carry TCP segments or UDP datagrams across networks. Now it is time to look at what happens in the top layer of a TCP/IP-based network, the application layer. This chapter starts with an explanation of client–server technology, which underlies most Internet activities. We examine the following applications in turn: the domain name system (DNS), the World Wide Web, remote access, file transfer, E-mail, the delivery of streamed content over the Internet and voice over IP (VoIP). We discuss the main protocols for each of these applications. The chapter ends with brief descriptions of peer-to-peer (P2P) file sharing, instant messaging (IM) and microblogging.