It is often said that hash table lookup operates in constant time: you compute the hash value, which gives you an index for an array lookup. Yet this ignores collisions; in the worst case, every item happens to land in the same bucket and the lookup time becomes linear ($\Theta(n)$). Are there conditions on the data that can make hash table lookup truly $O(1)$? Is that only on average, or can a hash table have $O(1)$ worst case lookup? Note: I'm coming from a programmer's perspective here; when I store data in a hash table, it's almost always strings or some composite data structures, and the data changes during the lifetime of the hash table. So while I appreciate answers about perfect hashes, they're cute but anecdotal and not practical from my point of view. P.S. Follow-up: For what kind of data are hash table operations O(1)?
There are two settings under which you can get $O(1)$ worst-case times. If your setting is static, then FKS hashing will get you worst-case $O(1)$ guarantees. But as you indicated, your setting isn't static. If you use cuckoo hashing, then queries and deletes are $O(1)$ worst-case, but insertion is only $O(1)$ in expectation. Cuckoo hashing works quite well if you have an upper bound on the total number of inserts and set the table size to be roughly 25% larger than that bound. There's more information here.
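To make the worst-case query bound concrete, here is a minimal cuckoo-hashing sketch in Python (my own illustration, not FKS hashing or any particular library; the second hash function is a crude stand-in for a proper independent hash family). Every key can live in only one of two slots, so a lookup never probes more than two positions, while an insert may have to displace existing entries and is therefore only expected constant time.

```python
class CuckooTable:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.t1 = [None] * capacity   # each slot holds a (key, value) pair or None
        self.t2 = [None] * capacity

    def _h1(self, key):
        return hash(key) % self.capacity

    def _h2(self, key):
        # Crude stand-in for an independent second hash function.
        return hash((key, 0x9E3779B97F4A7C15)) % self.capacity

    def lookup(self, key):
        # Worst case: exactly two probes, regardless of load.
        for slot in (self.t1[self._h1(key)], self.t2[self._h2(key)]):
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def insert(self, key, value, max_kicks=64):
        # Expected O(1): displace ("kick") entries back and forth between the
        # two tables; a real implementation rehashes or grows the table if this
        # loop runs too long. No duplicate-key handling in this sketch.
        entry = (key, value)
        for _ in range(max_kicks):
            i = self._h1(entry[0])
            entry, self.t1[i] = self.t1[i], entry
            if entry is None:
                return
            j = self._h2(entry[0])
            entry, self.t2[j] = self.t2[j], entry
            if entry is None:
                return
        raise RuntimeError("too many displacements; rehash needed")


t = CuckooTable()
t.insert("alice", 1)
t.insert("bob", 2)
print(t.lookup("alice"), t.lookup("bob"), t.lookup("carol"))  # 1 2 None
```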
https://api.stackexchange.com
Is there any resource (paper, blogpost, Github gist, etc.) describing the BWA-MEM algorithm for assigning mapping qualities? I vaguely remember that I have somewhere seen a formula for SE reads, which looked like $C * (s_1 - s_2) / s_1,$ where $s_1$ and $s_2$ denoted the alignment scores of two best alignments and C was some constant. I believe that a reimplementation of this algorithm in some scripting language could be very useful for the bioinfo community. For instance, I sometimes test various mapping methods and some of them tend to find good alignments, but fail in assigning appropriate qualities. Therefore, I would like to re-assign all the mapping qualities in a SAM file with the BWA-MEM algorithm. Btw. This algorithm must already have been implemented outside BWA, see the BWA-MEM paper: GEM does not compute mapping quality. Its mapping quality is estimated with a BWA-like algorithm with suboptimal alignments available. Unfortunately, the BWA-MEM paper repo contains only the resulting .eval files. Update: The question is not about the algorithm for computing alignment scores. Mapping qualities and alignment scores are two different things: Alignment score quantifies the similarity between two sequences (e.g., a read and a reference sequence) Mapping quality (MAQ) quantifies the probability that a read is aligned to a wrong position. Even alignments with high scores can have a very low mapping quality.
Yes, BWA-MEM was published as a preprint, which describes the seed extension: "BWA-MEM’s seed extension differs from the standard seed extension in two aspects. Firstly, suppose at a certain extension step we come to reference position x with the best extension score achieved at query position y. ... Secondly, while extending a seed, BWA-MEM tries to keep track of the best extension score reaching the end of the query sequence." There is also a description of the scoring algorithm directly in the source code of bwa-mem (lines 22 - 44), but maybe the only solution is really to go through the source code.
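For what it's worth, here is a heavily hedged Python sketch of a BWA-like single-end mapping quality built from the $C\,(s_1 - s_2)/s_1$ shape quoted in the question. The constant, the handling of a missing second-best hit, and the clamping to $[0, 60]$ are my own placeholder choices for illustration; the exact constants and corrections used by BWA-MEM live in its C source and differ in detail.

```python
def approx_mapq_se(s1, s2, scale=60.0, cap=60):
    """Rough single-end mapping quality from the best (s1) and second-best (s2)
    alignment scores, following the C * (s1 - s2) / s1 shape; s2 may be 0 or
    None when no secondary alignment was found."""
    if s1 <= 0:
        return 0
    s2 = max(s2 or 0, 0)
    mapq = scale * (s1 - s2) / s1
    return int(max(0, min(cap, round(mapq))))

print(approx_mapq_se(100, 0))    # unique hit -> capped at 60
print(approx_mapq_se(100, 100))  # equally good hit elsewhere -> 0
print(approx_mapq_se(100, 80))   # 12
```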
https://api.stackexchange.com
I hope this is the right place to ask this question. Suppose I found a small irregular shaped rock, and I wish to find the surface area of the rock experimentally. Unlike for volume, where I can simply use Archimedes principle, I cannot think of a way to find the surface area. I would prefer an accuracy to at least one hundredth of the stone size. How can I find the surface area experimentally?
I would ignore answers that say the surface area is ill-defined. In any realistic situation you have a lower limit for how fine a resolution is meaningful. This is like a pedant who says that hydrogen has an ill-defined volume because the electron wavefunction has no hard cutoff. Technically true, but practically not meaningful. My recommendation is an optical profilometer, which can measure the surface area quite well (for length scales above 400 nm). This method uses a coherent laser beam and interferometry to map the topography of the material's surface. Once you have the topography you can integrate it to get the surface area, as sketched below. Advantages of this method include: non-contact, non-destructive, variable surface area resolution to suit your needs, very fast (seconds to minutes), doesn't require any consumables besides electricity. Disadvantages include: you have to flip over your rock to get all sides and stitch the scans together to get the total topography, the instruments are too expensive for casual hobbyists (many thousands of dollars), no atomic resolution (scanning tunneling microscopy is better for that).
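Once the profilometer hands you a height map $z(x, y)$ on a regular grid, the integration step is straightforward: the local area element is $\sqrt{1 + z_x^2 + z_y^2}\,dx\,dy$. A minimal numpy sketch, assuming a single-valued height map and known grid spacing (the flat test patch is just a sanity check, not real instrument data):

```python
import numpy as np

def surface_area(z, dx, dy):
    """Approximate area of the surface z(x, y) sampled on a regular grid."""
    zy, zx = np.gradient(z, dy, dx)            # partial derivatives of the height map
    integrand = np.sqrt(1.0 + zx**2 + zy**2)   # local area magnification factor
    return integrand.sum() * dx * dy           # Riemann-sum approximation

# Sanity check: a flat 1 mm x 1 mm patch sampled every 10 um should give ~1 mm^2.
z_flat = np.zeros((100, 100))
print(surface_area(z_flat, dx=0.01, dy=0.01))  # 1.0 (mm^2)
```

For a whole rock you would stitch several such maps together, as noted above, and sum the areas.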
https://api.stackexchange.com
I was watching a nice little video on youtube but couldn't help but notice how snappy smaller animals such as rats and chipmunks move. By snappy I mean how the animal moves in almost discrete states pausing between each movement. Is this a trivial observation or something inherent in the neuro-synapse or muscular make-up of these animals?
Short answer Intermittent locomotion can increase the detection of prey by predators (e.g. rats), while it may lead to reduced attack rates in prey animals (e.g., rats and chipmunks). It may also increase physical endurance. Background Rather than moving continuously through the environment, many animals interrupt their locomotion with frequent brief pauses. Pauses increase the time required to travel a given distance and add costs of acceleration and deceleration to the energetic cost of locomotion. From an adaptation perspective, pausing should provide benefits that outweigh these costs (McAdam & Kramer, 1998). One potential benefit of pausing is increased detection of prey by predators. Slower movement speeds likely improve prey detection by providing more time to scan a given visual field. A second plausible benefit is reduced attack rate by predators. Many predators are more likely to attack moving prey, perhaps because such prey is more easily detected or recognized. Indeed, motionlessness (‘freezing’) is a widespread response by prey that detect a predator. A third benefit may be increased endurance. For animals moving faster than their aerobically sustainable speeds, the maximum distance run can be increased by taking pauses. These pauses allow the clearance of lactate from the muscles through aerobic mechanisms. PS: If by 'snappy' you mean not only that small animals move intermittently but also that they move fast, then Remi.b's answer nicely covers why small critters are quick. Basically, it comes down to Newton's second law: acceleration is inversely proportional to mass (a = F/m), but muscle force does not scale up in proportion to mass. Hence, bigger animals have more mass and need to build up a lot more force to accelerate at the same rate, and that build-up of force takes time (ever watched the vertical lift-off of a space shuttle?). Small critters therefore accelerate more quickly, which lets them move 'snappily'. Reference - McAdam & Kramer, Anim Behav (1998); 55: 109–117
https://api.stackexchange.com
Background: I think I might want to port some code that calculates matrix exponential-vector products using a Krylov subspace method from MATLAB to Python. (Specifically, Jitse Niesen's expmvp function, which uses an algorithm described in this paper.) However, I know that unless I make heavy use of functions from modules derived from compiled libraries (i.e., if I only use raw Python and not many built-in functions), it could be quite slow. Question: What tools or approaches are available to help me speed up code I write in Python for performance? In particular, I'm interested in tools that automate the process as much as possible, though general approaches are also welcome. Note: I have an older version of Jitse's algorithm, and haven't used it in a while. It could be very easy to make this code fast, but I felt like it would make a good concrete example, and it is related to my own research. Debating my approach for implementing this particular algorithm in Python is another question entirely.
I'm going to break up my answer into three parts: profiling, speeding up the Python code via C, and speeding up Python via Python. It is my view that Python has some of the best tools for looking at what your code's performance is and then drilling down to the actual bottlenecks. Speeding up code without profiling is about like trying to kill a deer with an uzi. If you are really only interested in mat-vec products, I would recommend scipy.sparse. Python tools for profiling: profile and cProfile modules: These modules will give you your standard run time analysis and function call stack. It is pretty nice to save their statistics, and using the pstats module you can look at the data in a number of ways. kernprof: this tool puts together many routines for doing things like line-by-line code timing. memory_profiler: this tool produces a line-by-line memory footprint of your code. IPython timers: The timeit function is quite nice for seeing the differences in functions in a quick interactive way. Speeding up Python: Cython: cython is the quickest way to take a few functions in Python and get faster code. You can decorate the function with the cython variant of Python and it generates C code. This is very maintainable and can also link to other hand-written code in C/C++/Fortran quite easily. It is by far the preferred tool today. ctypes: ctypes will allow you to write your functions in C and then wrap them quickly with its simple decoration of the code. It handles all the pain of casting from PyObjects and managing the GIL to call the C function. Other approaches exist for writing your code in C, but they are all somewhat more for taking a C/C++ library and wrapping it in Python. Python-only approaches: If you want to stay inside Python mostly, my advice is to figure out what data you are using and pick the correct data types for implementing your algorithms. It has been my experience that you will usually get much farther by optimizing your data structures than by any low-level C hack. For example: numpy: a contiguous array, very fast for strided operations on arrays. numexpr: a numpy array expression optimizer. It allows for multithreading numpy array expressions and also gets rid of the numerous temporaries numpy makes because of restrictions of the Python interpreter. blist: a b-tree implementation of a list, very fast for inserting, indexing, and moving the internal nodes of a list. pandas: data frames (or tables) with very fast analytics on the arrays. pytables: fast structured hierarchical tables (like HDF5), especially good for out-of-core calculations and queries on large data.
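As a concrete starting point for the profiling step, here is a small self-contained example applied to a stand-in matrix-vector product rather than to expmvp itself (the function name, array sizes, and the matvec.prof output file are placeholders I chose):

```python
import cProfile
import pstats
import timeit
import numpy as np

def matvec_many(A, v, repeats=200):
    """Stand-in workload: repeated dense mat-vec products."""
    for _ in range(repeats):
        w = A @ v
    return w

A = np.random.rand(500, 500)
v = np.random.rand(500)

# 1. Where does the time go?  Dump stats to a file and inspect with pstats.
cProfile.run("matvec_many(A, v)", "matvec.prof")
pstats.Stats("matvec.prof").sort_stats("cumulative").print_stats(5)

# 2. How fast is the suspected hot spot on its own?
print(timeit.timeit(lambda: A @ v, number=10000))
```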
https://api.stackexchange.com
I am currently doing the Udacity Deep Learning Tutorial. In Lesson 3, they talk about a 1x1 convolution. This 1x1 convolution is used in Google's Inception module. I'm having trouble understanding what a 1x1 convolution is. I have also seen this post by Yann LeCun. Could someone kindly explain this to me?
Suppose that I have a conv layer which outputs an $(N, F, H, W)$ shaped tensor where: $N$ is the batch size $F$ is the number of convolutional filters $H, W$ are the spatial dimensions Suppose the input is fed into a conv layer with $F_1$ 1x1 filters, zero padding and stride 1. Then the output of this 1x1 conv layer will have shape $(N, F_1, H , W)$. So 1x1 conv filters can be used to change the dimensionality in the filter space. If $F_1 > F$ then we are increasing dimensionality, if $F_1 < F$ we are decreasing dimensionality, in the filter dimension. Indeed, in the Google Inception article Going Deeper with Convolutions, they state (bold is mine, not by original authors): One big problem with the above modules, at least in this naive form, is that even a modest number of 5x5 convolutions can be prohibitively expensive on top of a convolutional layer with a large number of filters. This leads to the second idea of the proposed architecture: judiciously applying dimension reductions and projections wherever the computational requirements would increase too much otherwise. This is based on the success of embeddings: even low dimensional embeddings might contain a lot of information about a relatively large image patch...1x1 convolutions are used to compute reductions before the expensive 3x3 and 5x5 convolutions. Besides being used as reductions, they also include the use of rectified linear activation which makes them dual-purpose. So in the Inception architecture, we use the 1x1 convolutional filters to reduce dimensionality in the filter dimension. As I explained above, these 1x1 conv layers can be used in general to change the filter space dimensionality (either increase or decrease) and in the Inception architecture we see how effective these 1x1 filters can be for dimensionality reduction, explicitly in the filter dimension space, not the spatial dimension space. Perhaps there are other interpretations of 1x1 conv filters, but I prefer this explanation, especially in the context of the Google Inception architecture.
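The shape bookkeeping is easy to see in code: with stride 1 and no padding, a 1x1 convolution is just one linear map over the filter dimension applied independently at every spatial position. A small numpy illustration of that claim (my own example, not Inception code; bias and nonlinearity omitted):

```python
import numpy as np

N, F, H, W = 2, 256, 28, 28      # input: 256 filters
F1 = 64                          # 1x1 conv with 64 filters -> dimension reduction

x = np.random.rand(N, F, H, W)
w = np.random.rand(F1, F)        # one weight per (output filter, input filter) pair

# Apply the same F -> F1 linear map at every (h, w) location.
y = np.einsum("oc,nchw->nohw", w, x)
print(y.shape)                   # (2, 64, 28, 28)
```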
https://api.stackexchange.com
I was just sitting with my hand next to my nose and I realized that air was only coming out of the right nostril. Why is that? I would think I would use both, it seems much more efficient. Have I always only been breathing out of my right nostril?
Apparently you're not the first person to notice this; in 1895, a German nose specialist called Richard Kayser found that we have tissue called erectile tissue in our noses (yes, it is very similar to the tissue found in a penis). This tissue swells in one nostril and shrinks in the other, creating an open airway via only one nostril. What's more, he found that this is indeed a 'nasal cycle', changing every 2.5 hours or so. Of course, the other nostril isn't completely blocked, just mostly. If you try, you can feel a very light push of air out of the blocked nostril. This is controlled by the autonomic nervous system. You can change which nostril is closed and which is open by lying on one side to open the opposite one. Interestingly, some researchers think that this is the reason we often switch the sides we lie on during sleep rather regularly, as it is more comfortable to sleep on the side with the blocked nostril downwards. As to why we don't breathe through both nostrils simultaneously, I couldn't find anything that explains it. Sources: About 85% of People Only Breathe Out of One Nostril at a Time Nasal cycle
https://api.stackexchange.com
Evolution is often mistakenly depicted as linear in popular culture. One main feature of this depiction, not only in popular culture but even in science popularisation, is that some ocean-dwelling animal sheds its scales and fins and crawls onto land. Of course, this showcases only one ancestral lineage for one specific species (Homo sapiens). My question is: Where else did life evolve out of water onto land? Intuitively, this seems like a huge leap to take (adapting to a fundamentally alien environment) but it still must have happened several times (separately at least for plants, insects and chordates, since their respective most recent common ancestors were sea-dwelling). In fact, the more I think of it the more examples I find.
I doubt we know the precise number, or even anywhere near it. But there are several well-supported theorised colonisations which might interest you and help to build up a picture of just how common it was for life to transition to land. We can also use known facts about when different evolutionary lineages diverged, along with knowledge about the earlier colonisations of land, to work some events out for ourselves. I've done it here for broad taxonomic clades at different scales - if interested you could do the same thing again for lower sub-clades. As you rightly point out, there must have been at least one colonisation event for each lineage present on land which diverged from other land-present lineages before the colonisation of land. Using the evidence and reasoning I give below, at the very least, the following 9 independent colonisations occurred: bacteria cyanobacteria archaea protists fungi algae plants nematodes arthropods vertebrates Bacterial and archaean colonisation The first evidence of life on land seems to originate from 2.6 (Watanabe et al., 2000) to 3.1 (Battistuzzi et al., 2004) billion years ago. Since molecular evidence points to bacteria and archaea diverging between 3.2-3.8 billion years ago (Feng et al.,1997 - a classic paper), and since both bacteria and archaea are found on land (e.g. Taketani & Tsai, 2010), they must have colonised land independently. I would suggest there would have been many different bacterial colonisations, too. One at least is certain - cyanobacteria must have colonised independently from some other forms, since they evolved after the first bacterial colonisation (Tomitani et al., 2006), and are now found on land, e.g. in lichens. Protistan, fungal, algal, plant and animal colonisation Protists are a polyphyletic group of simple eukaryotes, and since fungal divergence from them (Wang et al., 1999 - another classic) predates fungal emergence from the ocean (Taylor & Osborn, 1996), they must have emerged separately. Then, since plants and fungi diverged whilst fungi were still in the ocean (Wang et al., 1999), plants must have colonised separately. Actually, it has been explicitly discovered in various ways (e.g. molecular clock methods, Heckman et al., 2001) that plants must have left the ocean separately to fungi, but probably relied upon them to be able to do it (Brundrett, 2002 - see note at bottom about this paper). Next, simple animals... Arthropods colonised the land independently (Pisani et al, 2004), and since nematodes diverged before arthropods (Wang et al., 1999), they too must have independently found land. Then, lumbering along at the end, came the tetrapods (Long & Gordon, 2004). Note about the Brundrett paper: it has OVER 300 REFERENCES! That guy must have been hoping for some sort of prize. References Battistuzzi FU, Feijao A, Hedges SB. 2004. A genomic timescale of prokaryote evolution: insights into the origin of methanogenesis, phototrophy, and the colonization of land. BMC Evol Biol 4: 44. Brundrett MC. 2002. Coevolution of roots and mycorrhizas of land plants. New Phytologist 154: 275–304. Feng D-F, Cho G, Doolittle RF. 1997. Determining divergence times with a protein clock: Update and reevaluation. Proceedings of the National Academy of Sciences 94: 13028 –13033. Heckman DS, Geiser DM, Eidell BR, Stauffer RL, Kardos NL, Hedges SB. 2001. Molecular Evidence for the Early Colonization of Land by Fungi and Plants. Science 293: 1129 –1133. Long JA, Gordon MS. 2004. 
The Greatest Step in Vertebrate History: A Paleobiological Review of the Fish‐Tetrapod Transition. Physiological and Biochemical Zoology 77: 700–719. Pisani D, Poling LL, Lyons-Weiler M, Hedges SB. 2004. The colonization of land by animals: molecular phylogeny and divergence times among arthropods. BMC Biol 2: 1. Taketani RG, Tsai SM. 2010. The influence of different land uses on the structure of archaeal communities in Amazonian anthrosols based on 16S rRNA and amoA genes. Microb Ecol 59: 734–743. Taylor TN, Osborn JM. 1996. The importance of fungi in shaping the paleoecosystem. Review of Palaeobotany and Palynology 90: 249–262. Wang DY, Kumar S, Hedges SB. 1999. Divergence time estimates for the early history of animal phyla and the origin of plants, animals and fungi. Proc Biol Sci 266: 163–171. Watanabe Y, Martini JEJ, Ohmoto H. 2000. Geochemical evidence for terrestrial ecosystems 2.6 billion years ago. Nature 408: 574–578.
https://api.stackexchange.com
I read the definition of work as $$W ~=~ \vec{F} \cdot \vec{d}$$ $$\text{ Work = (Force) $\cdot$ (Distance)}.$$ If a book is lying on the table, no work is done as no distance is covered. If I hold up a book in my hand with my arm stretched out, and no work is being done, where is my energy going?
While you do spend some body energy to keep the book lifted, it's important to differentiate it from physical effort. They are connected but are not the same. Physical effort depends not only on how much energy is spent, but also on how energy is spent. Holding a book in a stretched arm requires a lot of physical effort, but it doesn't take that much energy. In the ideal case, if you manage to hold your arm perfectly steady, and your muscle cells could stay contracted without requiring energy input, there wouldn't be any energy spent at all because there wouldn't be any distance moved. In real scenarios, however, you do spend (chemical) energy stored within your body, but where is it spent? It is spent on a cellular level. Muscles are made of filaments which can slide relative to one another; these filaments are connected by molecules called myosin, which use up energy to move along the filaments but detach at intervals to let them slide. When you keep your arm in position, myosins hold the filaments in position, but when one of them detaches other myosins have to make up for the slight relaxation locally. Chemical energy stored within your body is released by the cell as both work and heat.* In both the ideal and the real scenarios we are talking about the physical definition of energy. In your reasoning, you ignore the movement of muscle cells, so you're considering the ideal case. A careful analysis of the real case leads to the conclusion that work is done and heat is released, even though the arm itself isn't moving. * Ultimately, the work done by the cells is actually done on other cells, which eventually dissipates into heat due to friction and non-elasticity. So all the energy you spend is invested in keeping the muscle tension and eventually dissipated as heat.
https://api.stackexchange.com
I have asked a lot of questions on coordination chemistry here before and I have gone through a lot of others here as well. Students, including me, attempt to answer those questions using the concept of hybridization because that's what we are taught in class and of course it's easier and more intuitive than crystal field theory/molecular orbital theory. But almost every time I attempt to use the concept of hybridization to explain bonding, somebody comes along and tells me that it's wrong. How do you determine the hybridisation state of a coordinate complex? This is a link to one such question, and the first thing that the person who answered it says is: "Again, I feel a bit like a broken record. You should not use hybridization to describe transition metal complexes." I need to know: Why is it wrong? Is it wrong because it's oversimplified? Why does it work well while explaining bonding in other compounds? What goes wrong in the case of transition metals?
Tetrahedral complexes Let's consider, for example, a tetrahedral $\ce{Ni(II)}$ complex ($\mathrm{d^8}$), like $\ce{[NiCl4]^2-}$. According to hybridisation theory, the central nickel ion has $\mathrm{sp^3}$ hybridisation, the four $\mathrm{sp^3}$-type orbitals are filled by electrons from the chloride ligands, and the $\mathrm{3d}$ orbitals are not involved in bonding. Already there are several problems with this interpretation. The most obvious is that the $\mathrm{3d}$ orbitals are very much involved in (covalent) bonding: a cursory glance at a MO diagram will show that this is the case. If they were not involved in bonding at all, they should remain degenerate, which is obviously untrue; and even if you bring in crystal field theory (CFT) to say that there is an ionic interaction, it is still not sufficient. If accuracy is desired, the complex can only really be described by a full MO diagram. One might ask why we should believe the MO diagram over the hybridisation picture. The answer is that there is a wealth of experimental evidence, especially electronic spectroscopy ($\mathrm{d-d^*}$ transitions being the most obvious example), and magnetic properties, that is in accordance with the MO picture and not the hybridisation one. It is simply impossible to explain many of these phenomena using this $\mathrm{sp^3}$ model. Lastly, hybridisation alone cannot explain whether a complex should be tetrahedral ($\ce{[NiCl4]^2-}$) or square planar ($\ce{[Ni(CN)4]^2-}$, or $\ce{[PtCl4]^2-}$). Generally the effect of the ligand, for example, is explained using the spectrochemical series. However, hybridisation cannot account for the position of ligands in the spectrochemical series! To do so you would need to bring in MO theory. Octahedral complexes Moving on to $\ce{Ni(II)}$ octahedral complexes, like $\ce{[Ni(H2O)6]^2+}$, the typical explanation is that there is $\mathrm{sp^3d^2}$ hybridisation. But all the $\mathrm{3d}$ orbitals are already populated, so where do the two $\mathrm{d}$ orbitals come from? The $\mathrm{4d}$ set, I suppose. The points raised above for tetrahedral case above still apply here. However, here we have something even more criminal: the involvement of $\mathrm{4d}$ orbitals in bonding. This is simply not plausible, as these orbitals are energetically inaccessible. On top of that, it is unrealistic to expect that electrons will be donated into the $\mathrm{4d}$ orbitals when there are vacant holes in the $\mathrm{3d}$ orbitals. For octahedral complexes where there is the possibility for high- and low-spin forms (e.g., $\mathrm{d^5}$ $\ce{Fe^3+}$ complexes), hybridisation theory becomes even more misleading: Hybridisation theory implies that there is a fundamental difference in the orbitals involved in metal-ligand bonding for the high- and low-spin complexes. However, this is simply not true (again, an MO diagram will illustrate this point). And the notion of $\mathrm{4d}$ orbitals being involved in bonding is no more realistic than it was in the last case, which is to say, utterly unrealistic. In this situation, one also has the added issue that hybridisation theory provides no way of predicting whether a complex is high- or low-spin, as this again depends on the spectrochemical series. Summary Hybridisation theory, when applied to transition metals, is both incorrect and inadequate. It is incorrect in the sense that it uses completely implausible ideas ($\mathrm{3d}$ metals using $\mathrm{4d}$ orbitals in bonding) as a basis for describing the metal complexes. 
That alone should cast doubt on the entire idea of using hybridisation for the $\mathrm{3d}$ transition metals. However, it is also inadequate in that it does not explain the rich chemistry of the transition metals and their complexes, be it their geometries, spectra, reactivities, or magnetic properties. This prevents it from being useful even as a predictive model. What about other chemical species? You mentioned that hybridisation works well for "other compounds." That is really not always the case, though. For simple compounds like water, etc. there are already issues associated with the standard VSEPR/hybridisation theory. Superficially, the $\mathrm{sp^3}$ hybridisation of oxygen is consistent with the observed bent structure, but that's just about all that can be explained. The photoelectron spectrum of water shows very clearly that the two lone pairs on oxygen are inequivalent, and the MO diagram of water backs this up. Apart from that, hybridisation has absolutely no way of explaining the structures of boranes; Wade's rules do a much better job with the delocalised bonding. And these are just Period 2 elements - when you go into the chemistry of the heavier elements, hybridisation generally becomes less and less useful a concept. For example, hypervalency is a huge problem: $\ce{SF6}$ is claimed to be $\mathrm{sp^3d^2}$ hybridised, but in fact $\mathrm{d}$-orbital involvement in bonding is negligible. On the other hand, non-hypervalent compounds, such as $\ce{H2S}$, are probably best described as unhybridised - what happened to the theory that worked so well for $\ce{H2O}$? It just isn't applicable here, for reasons beyond the scope of this post. There is probably one scenario in which it is really useful, and that is when describing organic compounds. The reason for this is because tetravalent carbon tends to conform to the simple categories of $\mathrm{sp}^n$ $(n \in \{1, 2, 3\})$; we don't have the same teething issues with $\mathrm{d}$-orbitals that have been discussed above. But there are caveats. For example, it is important to recognise that it is not atoms that are hybridised, but rather orbitals: for example, each carbon in cyclopropane uses $\mathrm{sp^5}$ orbitals for the $\ce{C-C}$ bonds and $\mathrm{sp^2}$ orbitals for the $\ce{C-H}$ bonds. The bottom line is that every model that we use in chemistry has a range of validity, and we should be careful not to use a model in a context where it is not valid. Hybridisation theory is not valid in the context of transition metal complexes, and should not be used as a means of explaining their structure, bonding, and properties.
https://api.stackexchange.com
I put a pot of water in the oven at $\mathrm{500^\circ F}$ ($\mathrm{260^\circ C}$ , $\mathrm{533 K}$). Over time most of the water evaporated away but it never boiled. Why doesn't it boil?
The "roiling boil" is a mechanism for moving heat from the bottom of the pot to the top. You see it on the stovetop because most of the heat generally enters the liquid from a superheated surface below the pot. But in a convection oven, whether the heat enters from above, from below, or from both equally depends on how much material you are cooking and the thermal conductivity of its container. I had an argument about this fifteen years ago which I settled with a great kitchen experiment. I put equal amounts of water in a black cast-iron skillet and a glass baking dish with similar horizontal areas, and put them in the same oven. (Glass is a pretty good thermal insulator; the relative thermal conductivities and heat capacities of aluminum, stainless steel, and cast iron surprise me whenever I look them up.) After some time, the water in the iron skillet was boiling like gangbusters, but the water in the glass was totally still. A slight tilt of the glass dish, so that the water touched a dry surface, was met with a vigorous sizzle: the water was keeping the glass temperature below the boiling point where there was contact, but couldn't do the same for the iron. When I pulled the two pans out of the oven, the glass pan was missing about half as much water as the iron skillet. I interpreted this to mean that boiling had taken place from the top surface only of the glass pan, but from both the top and bottom surfaces of the iron skillet. Note that it is totally possible to get a bubbling boil from an insulating glass dish in a hot oven; the bubbles are how you know when the lasagna is ready. (A commenter reminds me that I used the "broiler" element at the top of the oven rather than the "baking" element at the bottom of the oven, to increase the degree to which the heat came "from above." That's probably also why I chose black cast iron: to capture more of the radiant heat.)
https://api.stackexchange.com
I know mathematically the answer to this question is yes, and it's very obvious to see that the dimensions of a ratio cancel out, leaving behind a mathematically dimensionless quantity. However, I've been writing a C++ dimensional analysis library (the specifics of which are out of scope), which has me thinking about the problem because I decided to handle angle units as dimensioned quantities, which seemed natural to enable the unit conversion with degrees. The overall purpose of the library is to disallow operations that don't make sense because they violate the rules of dimensional analysis, e.g. adding a length quantity to an area quantity, and thus provide some built-in sanity checking to the computation. Treating radians as units made sense because of some of the properties that dimensioned quantities seemed to me to have: The sum and difference of two quantities with the same dimension have the same physical meaning as both quantities separately. Quantities with the same dimension are meaningfully comparable to each other, and not meaningfully comparable (directly) to quantities with different dimensions. Dimensions may have different units that are scalar multiples of one another (sometimes with a datum shift). If the angle is treated as a dimension, my 3 made-up properties are satisfied, and everything "makes sense" to me. I can't help thinking that the fact that radians are a ratio of lengths (SI defines them as m/m) is actually critically important, even though the length is cancelled out. For example, though radians and steradians are both dimensionless, it would be a logical error to take their sum. I also can't see how a ratio of something like (kg/kg) could be described as an "angle". This seems to imply to me that not all dimensionless units are compatible, which seems analogous to how units with different dimensions are not compatible. And if not all dimensionless units are compatible, then the dimensionless "dimension" would violate made-up property #1 and cause me a lot of confusion. However, treating radians as having dimension also has a lot of issues, because now your trig functions have to be written in terms of $\cos(\text{angleUnit}) = \text{dimensionless unit}$ even though they are analytic functions (although I'm not convinced that's bad). Small-angle assumptions in this scheme would be defined as performing implicit unit conversions, which is logical given our trig function definitions but incompatible with how many functions are defined, especially since many authors neglect to mention they are making those assumptions. So I guess my question is: are all dimensionless quantities, but specifically angle quantities, really compatible with all other dimensionless quantities? And if not, don't they actually have dimension or at least the properties of dimension?
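For what it's worth, here is a minimal Python sketch of the kind of checking described above, with angle kept as its own base dimension (this is my own illustration of the design being debated, not the actual C++ library; treating a steradian as angle squared is just one possible convention):

```python
class Quantity:
    """A value plus a dict of base-dimension exponents, e.g. {"length": 1}."""
    def __init__(self, value, dims):
        self.value = value
        self.dims = {k: v for k, v in dims.items() if v != 0}

    def __add__(self, other):
        # Made-up properties 1 and 2: only like dimensions may be added/compared.
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims or 'dimensionless'} "
                            f"to {other.dims or 'dimensionless'}")
        return Quantity(self.value + other.value, self.dims)

def metre(v):     return Quantity(v, {"length": 1})
def radian(v):    return Quantity(v, {"angle": 1})
def steradian(v): return Quantity(v, {"angle": 2})   # one possible convention

print((radian(1.0) + radian(0.5)).value)   # 1.5, fine
try:
    metre(1.0) + radian(1.0)               # rejected, as intended
except TypeError as err:
    print(err)
try:
    radian(1.0) + steradian(1.0)           # also rejected under this convention
except TypeError as err:
    print(err)
```

Under this design, the trig-function issue raised above would surface as an explicit, documented conversion from the angle dimension to a plain number at the call site.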
The answers are no and no. Being dimensionless or having the same dimension is a necessary condition for quantities to be "compatible", but it is not a sufficient one. What one is trying to avoid is called a category error. There is an analogous situation in computer programming: one wishes to avoid putting values of some data type into places reserved for a different data type. But while having the same dimension is certainly required for values to belong to the same "data type", there is no reason why they cannot be demarcated by many other categories in addition to that. The newton meter is a unit of both torque and energy, and the joule per kelvin of both entropy and heat capacity, but adding them is typically problematic. The same goes for adding proverbial apples and oranges measured in "dimensionless units" of counting numbers. Actually, the last example shows that the demarcation of categories depends on context: if one only cares about apples and oranges as objects it might be OK to add them. Dimension is so prominent in physics because it is rarely meaningful to mix quantities of different dimensions, and there is a nice calculus (dimensional analysis) for keeping track of it. But it also makes sense to introduce additional categories to demarcate values of quantities like torque and energy, even if there may not be as nice a calculus for them. As your own examples show, it also makes sense to treat radians differently depending on context: take their category ("dimension") vis-a-vis steradians or counting numbers into account when deciding about addition, but disregard it when it comes to substitution into transcendental functions. Hertz is typically used to measure wave frequency, but because cycles and radians are officially dimensionless it shares dimension with the unit of angular velocity, the radian per second. Radians also make the only difference between amperes for electric current and ampere-turns for magnetomotive force. Similarly, dimensionless steradians are the only difference between the lumen and the candela, while luminous intensity and flux are often distinguished. So in those contexts it might also make sense to treat radians and steradians as "dimensional". In fact, radians and steradians were in a class of their own as "supplementary units" of SI until 1995. That year the International Bureau of Weights and Measures (BIPM) decided that the "ambiguous status of the supplementary units compromises the internal coherence of the SI", and reclassified them as "dimensionless derived units, the names and symbols of which may, but need not, be used in expressions for other SI derived units, as is convenient", thus eliminating the class of supplementary units. The desire to maintain a general rule that arguments of transcendental functions must be dimensionless might have played a role, but this shows that dimensional status is to a degree decided by convention rather than by fact. In the same vein, the ampere was introduced as a new base unit into the MKS system only in 1901, and incorporated into SI even later. As the name suggests, MKS originally made do with just meters, kilograms, and seconds as base units; however, this required fractional powers of meters and kilograms in the derived units of electric current. As @dmckee pointed out, energy and torque can be distinguished as scalars and pseudo-scalars, meaning that under orientation-reversing transformations like reflections, the former keep their value while the latter switch sign.
This brings up another categorization of quantities that plays a big role in physics, by transformation rules under coordinate changes. Among vectors there are "true" vectors (like velocity), covectors (like momentum), and pseudo-vectors (like angular momentum); in fact, all tensor quantities are categorized by representations of the orthogonal (in relativity, the Lorentz) group. This also comes with a nice calculus describing how tensor types combine under various operations (dot product, tensor product, wedge product, contractions, etc.). One reason for rewriting Maxwell's electrodynamics in terms of differential forms is to keep track of them. This becomes important when, say, the background metric is not Euclidean, because the identification of vectors and covectors depends on it. Different tensor types tend to have different dimensions anyway, but there are exceptions, and the two categorizations are clearly independent. But even tensor type may not be enough. Before Joule's measurements of the mechanical equivalent of heat in the 1840s, the quantity of heat (measured in calories) and mechanical energy (measured in derived units) had two different dimensions. But even today one may wish to keep them in separate categories when studying a system where mechanical and thermal energy are approximately separately conserved; the same applies to Einstein's mass-energy. This means that categorical boundaries are not set in stone; they may be erected or taken down either for practical expediency or due to a physical discovery. Many historical peculiarities in the choice and development of units and unit systems are described in Klein's book The Science of Measurement.
https://api.stackexchange.com
Some sources describe antimatter as just like normal matter, but "going backwards in time". What does that really mean? Is that a good analogy in general, and can it be made mathematically precise? Physically, how could something move backwards in time?
To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what it would really mean to move backwards in time, from the popular viewpoint. If I'm remembering correctly, this idea all comes from a story involving John Wheeler and Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Wheeler suggested to Feynman the very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron. Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor," and others. As the particles move, these conserved quantities produce "currents," which have a direction based on the motion and sign of the conserved quantity. If you apply the time reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle. For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge. $$\vec{I} = q\vec{v}$$ Positive charge moving left ($+q\times -v$) is equivalent to negative charge moving right ($-q\times +v$). If you have a current of electrons moving to the right, and you apply the time reversal operator, it converts the rightward velocity to leftward velocity ($-q\times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them continue to move to the right ($+q\times +v$); either way, you wind up with the net positive charge flow moving to the right. By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, that says that if you apply the three operations of time reversal, charge conjugation (switching particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant.
https://api.stackexchange.com
I'd like to do some hobbyist soldering at home, and would like to make sure I don't poison those living with me (especially small children). Lead-free seems necessary - what other features should I look for in solder? Are the different types of solder roughly the same in terms of safety (breathing the fumes, vapor fallout, etc.)? Is there more I should do to keep the place clean besides having a filter fan and wiping down the work surface when finished?
What type of solder is safest for home (hobbyist) use? This advice is liable to be met with doubt and even derision by some - by all means do your own checks, but please at least think about what I write here: I have cited a number of references below which give guidelines for soldering. These are as applicable for lead-free solders as for lead-based solders. If you decide after reading the following not to trust lead-based solders, despite my advice, then the guidelines will still prove useful. It is widely known that the improper handling of metallic lead can cause health problems. However, it is widely understood currently and historically that use of tin-lead solder in normal actual soldering applications has essentially no negative health impact. Handling of the lead-based solder, as opposed to the actual soldering, needs to be done sensibly but this is easily achieved with basic common-sense procedures. While some electrical workers do have mildly increased epidemiological incidences of some diseases, these appear to be related to electric field exposure - and even then the correlations are so small as to generally be statistically insignificant. Lead metal has a very low vapor pressure and when exposed at room temperatures essentially none is inhaled. At soldering temperatures vapor levels are still essentially zero. Tin-lead solder is essentially safe if used anything like sensibly. While some people express doubts about its use in any manner, these are not generally well founded in formal medical evidence or experience. While it IS possible to poison yourself with tin-lead solder, taking even very modest and sensible precautions renders the practice safe for the user and for others in their household. While you would not want to allow children to suck it, anything like reasonable precautions are going to result in its use not being an issue. A significant proportion of lead which is "ingested" (taken orally or eaten) will be absorbed by the body. BUT you will acquire essentially no ingested lead from soldering if you don't eat it, don't suck solder and wash your hands after soldering. Smoking while soldering is liable to be even unwiser than usual. It is widely accepted that inhaled lead from soldering is not at a dangerous level. The majority of inhaled lead is absorbed by the body. BUT the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering. Sticking a soldering iron up your nose (hot or cold) is liable to damage your health but not due to the effects of lead. The vapor pressure of lead at 330 °C (VERY hot for solder) / 600 Kelvin is about 10⁻⁸ mm of mercury. Lead = "Pb" crosses the x-axis at 600K on the lower graph here. These are interesting and useful graphs of vapor pressure versus temperature for many elements. (By comparison, Zinc has about 1,000,000 times as high a vapor pressure at the same temperature, and Cadmium (which should definitely be avoided) 10,000,000 times as high.) Atmospheric pressure is ~ 760 mm of Hg so lead vapor pressure at a VERY hot iron temperature is about 1 part in 10¹¹ or one part per 100 billion. The major problems with lead are caused either by its release into the environment where it can be converted to more soluble forms and introduced into the food chain, or by its use in forms which are already soluble or which are liable to be ingested.
So, lead paint on toys or nursery furniture, lead paint on houses which gets turned into sanding dust or paint flakes, lead as an additive in petrol which gets disseminated in gaseous and soluble forms, or lead which ends up in landfills are all forms which cause real problems and which have led to bans on lead in many situations. Lead in solder is bad for the environment because of where it is liable to end up when it is disposed of. This general prohibition has led to a large degree of misunderstanding about its use "at the front end". If you insist on regularly vaporising lead in close proximity to your person by e.g. firing a handgun frequently, then you should take precautions re vapor inhalation. Otherwise, common sense is very likely to be good enough. Washing your hands after soldering is a wise precaution but more likely to be useful for removal of trace solid lead particles. Use of a fume extractor & filter is wise - but I'd be far more worried about the resin or flux smoke than about lead vapor. Sean Breheney notes: "There IS a significant danger associated with inhaling the fumes of certain fluxes (including rosin) and therefore fume extraction or excellent ventilation is, in my opinion, essential for anyone doing soldering more often than, say, 1 hour per week. I generally have trained myself to inhale when the fumes are not being generated and exhale slowly while actually soldering - but that is only adequate for very small jobs and I try to remember to use a fume extractor for larger ones." (Added July 2021) Note that there are MANY documents on the web which state that lead solder is hazardous. Few or none try to explain why this is said to be the case. Soldering precautions sheet. They note: Potential exposure routes from soldering include ingestion of lead due to surface contamination. The digestive system is the primary means by which lead can be absorbed into the human body. Skin contact with lead is, in and of itself, harmless, but getting lead dust on your hands can result in it being ingested if you don't wash your hands before eating, smoking, etc. An often overlooked danger is the habit of chewing fingernails. The spaces under the fingernails are great collectors of dirt and dust. Almost everything that is handled or touched may be found under the fingernails. Ingesting even a small amount of lead is dangerous because it is a cumulative poison which is not excreted by normal bodily function. Lead soldering safety guidelines. Standard advice. Their comments on lead fumes are rubbish. FWIW - the vapor pressure of lead is given by $$\log_{10}p(\mathrm{mm}) = -\frac{10372}{T} - \log_{10}T + 11.35$$ Quoted from The Vapor Pressures of Metals; a New Experimental Method. Wikipedia - Vapor pressure. For more on soldering in general see Better soldering. Lead spatter and inhalation & ingestion It's been suggested that the statement "The majority of inhaled lead is absorbed by the body. BUT the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering." is not relevant, as it's suggested that vapor pressure isn't important if the lead is being atomized into droplets that you can then inhale; look around the soldering iron and there's lead dust everywhere. In response: "Inhalation" there referred to lead rendered gaseous - usually by chemical combination.
For example, the use of tetraethyl lead (TEL) in petrol resulted in gaseous lead compounds, not directly from the TEL itself but, as the Wikipedia Tetraethyllead page explains, from the scavengers added alongside it: The Pb and PbO would quickly over-accumulate and destroy an engine. For this reason, the lead scavengers 1,2-dibromoethane and 1,2-dichloroethane are used in conjunction with TEL—these agents form volatile lead(II) bromide and lead(II) chloride, respectively, which are flushed from the engine and into the air. In engines this process occurs at far higher temperatures than exist in soldering, and there is no intentional process which produces volatile lead compounds. (The exceedingly unfortunate may discover a flux which contains substances like the above lead-scavenging halides, but by the very nature of flux this seems vanishingly unlikely in the real world.) Lead in metallic droplets at soldering temperatures does not come close to being vaporised at anything like significant partial pressures (see comments and references above), and if any enters the body it counts as 'ingested', not inhaled. Basic precautions against ingestion are widely recommended, as mentioned above. Washing of hands, not smoking while soldering and not licking lead have been noted as sensible. For lead "spatter" to qualify for direct ingestion it would need to ballistically enter the mouth or nose while soldering. It's conceivable that some may do this but if any does the quantity is very small. It's generally recognised both historically and currently that the actual soldering process is not what's hazardous. A significant number of webpages do state that lead from solder is vaporized by soldering and that dangerous quantities of lead can be inhaled. On EVERY such page I have looked at there are no references to anything like reputable sources, and in almost every such case there are no references at all. The general RoHS prohibitions and the undoubted dangers that lead poses in appropriate circumstances have led to a raft of urban legend and spurious comments without any traceable foundations. And again ... It was suggested that: Anyone who's sneezed in a dusty room knows that it doesn't have to enter the nose or mouth "ballistically". Any time solder splatters or flux pops, it creates tiny droplets of lead that solidify to dust. Small enough particles of dust can be airborne and small exposures over years accumulate in the body. "Lead dust can form when lead-based paint is dry scraped, dry sanded, or heated. Lead chips and dust can get on surfaces and objects that people touch. Settled lead dust can re-enter the air when people vacuum, sweep or walk through it." In response: A quality reference, or a few, indicating that airborne dust can be produced in significant quantity by soldering would go a long way towards establishing the assertions. Finding negative evidence is, as ever, harder. There is no question about the dangers from lead-based paints, whether from airborne dust from sanding, children sucking lead-painted objects or surface dust; all these are extremely well documented. Lead in a metallic alloy for soldering is an entirely different animal. I have many decades of personal soldering experience and a reasonable awareness of industry experience. Dusty rooms we all know about, but that has no link to whether solder does or doesn't produce lead dust. Soldering can produce small lead particles, but these appear to be metallic alloyed lead.
"Lead" dust from paint is liable to contain lead oxide or occasionally other lead-based substances. Such dust may indeed be subject to aerial transmission if finely enough divided, but this provides no information about how metallic lead performs in dust production. I am unaware of discernible "lead dust" occurring from 'popping flux', and I'm unaware of any mechanism that would allow mechanically small lead droplets to achieve a low enough density to float in air in the normal sense. Brownian motion could loft metallic lead particles of a small enough size. I've not seen any evidence (or found any references) that suggests that small enough particles are formed in measurable quantities. Interestingly - this answer had 2 downvotes - now it has one. Somebody changed their mind. Thanks. Somebody didn't. Maybe they'd like to tell me why? The aim is to be balanced and objective and as factual as possible. If it falls short please advise. ___________________________________________________________ Added 2020: SUCKING SOLDER? I remember biting solder when I was a kid and for about 2 years I wouldn't wash my hands after soldering. Will the effects show up in the future?? I can only give you a layman's opinion. I'm not qualified to give medical advice. I'd GUESS it's probably OK BUT I don't know. I suspect that the effects are limited due to the insolubility of lead - BUT lead poisoning from finely divided lead such as in paint is a significant poisoning path. You can be tested for lead in the blood very easily (it requires one drop of blood) and it's probably worth doing. Internet diagnosis is, as I'm sure you know, a very poor substitute for proper medical advice. That said, here is Mayo Clinic's page on Lead poisoning symptoms & causes. And here is their page on diagnosis and treatment. Mayo Clinic is one of the better sources for medical advice but, even then, it certainly does not replace proper medical advice.
https://api.stackexchange.com
I was thinking yesterday about insects (as there was a spider in the house, and I couldn't help but think of anything else, even though they aren't insects), and I started to wonder if ants sleep? After thinking about it for a while I decided that they might sleep, but then what would be the purpose of sleeping for them? My limited understanding of the need of sleep is that it is used for the brain to compartmentalise the events of the day and allow memories to be formed. But ants don't really have to think about much during the day, given that they act more as a collective than an individual. Or in the case of other insects, they have simpler more instinctive brains which rely on taxis, reflexes and kineses. So, do ants and other insects sleep (or do they have a different type of sleep to us) and what would the purpose of it be for them?
A quick search on Web of Science yields "Polyphasic Wake/Sleep Episodes in the Fire Ant, Solenopsis Invicta" (Cassill et al., 2009; @Mike Taylor found an accessible copy here) as one of the first hits. The main points from the abstract: Yes, ants sleep. Indicators of deep sleep: ants are non-responsive to contact by other ants and their antennae are folded; there is also a distinct state with rapid antennal movement (RAM sleep). Queens have about 92 sleep episodes per day, each 6 minutes long. Queens synchronize their wake/sleep cycles. Workers have about 253 sleep episodes per day, each 1.1 minutes long. "Activity episodes were unaffected by light/dark periods." If you study the paper you might find more information in its introduction or in the references regarding why ants sleep, although there doesn't seem to be a scientific consensus. The abstract only says that the shorter total sleeping time of the workers is likely related to them being disposable.
https://api.stackexchange.com
Carbon is well known to form single, double, and triple $\ce{C-C}$ bonds in compounds. There is a recent report (2012) that carbon forms a quadruple bond in diatomic carbon, $\ce{C2}$. The excerpt below is taken from that report. The fourth bond seems pretty odd to me. $\ce{C2}$ and its isoelectronic molecules $\ce{CN+}$, BN and $\ce{CB-}$ (each having eight valence electrons) are bound by a quadruple bond. The bonding comprises not only one σ- and two π-bonds, but also one weak ‘inverted’ bond, which can be characterized by the interaction of electrons in two outwardly pointing sp hybrid orbitals. According to Shaik, the existence of the fourth bond in $\ce{C2}$ suggests that it is not really diradical... If $\ce{C2}$ were a diradical it would immediately form higher clusters. I think the fact that you can isolate $\ce{C2}$ tells you it has a barrier, small as it may be, to prevent that. Molecular orbital theory for dicarbon, on the other hand, predicts a C-C double bond in $\ce{C2}$ with 2 pairs of electrons in $\pi$ bonding orbitals and a bond order of two. "The bond dissociation energies (BDE) of $\ce{B2, C2}$, and $\ce{N2}$ show increasing BDE consistent with single, double, and triple bonds." (Ref) So this model of the $\ce{C2}$ molecule seems quite reasonable. My questions, since this is most definitely not my area of expertise: Is dicarbon found naturally in any quantity and how stable is it? Is it easy to make in the lab? (The Wikipedia article reports it in stellar atmospheres, electric arcs, etc.) Is there good evidence for the presence of a quadruple bond in $\ce{C2}$ that wouldn't be equally well explained by double bonding?
Okay, this is not so much of an answer as it is a summary of my own progress on this topic after giving it some thought. I don't think it's a settled debate in the community yet, so I don't feel too ashamed about it :) A few of the things worthy of note are: The bond energy found by the authors for this fourth bond is $\pu{13.2 kcal/mol}$, i.e. about $\pu{55 kJ/mol}$. This is very weak for a covalent bond. You can compare it to other values here, or to the energies of the first three bonds in triple-bonded carbon, which are respectively $348, 266$, and $\pu{225 kJ/mol}$. This fourth bond is actually even weaker than the strongest of hydrogen bonds ($\ce{F\bond{...}H–F}$, at $\pu{160 kJ/mol}$). Another point of view on this article could thus be: “valence bond theory necessarily predicts a quadruple bond, and it was now precisely calculated and found to be quite weak.” The findings of this article are consistent with earlier calculations using other quantum chemistry methods (e.g. the DFT calculations in ref. 48 of the Nature Chemistry paper), which have found a bond order between 3 and 4 for molecular dicarbon. However, the existence of this quadruple bond is somewhat at odds with the cohesive energy of gas-phase dicarbon, which according to Wikipedia is $\pu{6.32 eV}$, i.e. $\pu{609 kJ/mol}$. This latter value is much more in line with typical double bonds, reported at an average of $\pu{614 kJ/mol}$. This is still a bit of a mystery to me…
https://api.stackexchange.com
The most notable characteristic of polytetrafluoroethylene (PTFE, DuPont's Teflon) is that nothing sticks to it. This complete inertness is attributed to the fluorine atoms completely shielding the carbon backbone of the polymer. If nothing indeed sticks to Teflon, how might one coat an object (say, a frying pan) with PTFE?
It has to be so common a question that the answer is actually given in various places on DuPont's own website (DuPont being the makers of Teflon): “If nothing sticks to Teflon®, then how does Teflon® stick to a pan?” Nonstick coatings are applied in layers, just like paint. The first layer is the primer—and it's the special chemistry in the primer that makes it adhere to the metal surface of a pan. Another of their webpages shows the coating as a stack of layers (primer, “mid coat” and top coat). The primer adheres very strongly to the roughened surface, often obtained by sandblasting: it's chemisorption, and the primer's chemical nature is chosen so as to bond strongly both to the metal surface and to the PTFE layers above it. The PTFE chain extremities then create bonds with the primer, and thus it stays put.
https://api.stackexchange.com
We know that $\mathbf A$ is symmetric and positive-definite. We know that $\mathbf B$ is orthogonal. Question: is $\mathbf B \cdot\mathbf A \cdot\mathbf B^\top$ symmetric and positive-definite? Answer: Yes. Question: Could a computer have told us this? Answer: Probably. Are there any symbolic algebra systems (like Mathematica) that handle and propagate known facts about matrices? Edit: To be clear, I'm asking this question about abstractly defined matrices, i.e. I don't have explicit entries for $A$ and $B$; I just know that they are both matrices and have particular attributes like symmetric, positive definite, etc.
Edit: This is now in SymPy $ isympy In [1]: A = MatrixSymbol('A', n, n) In [2]: B = MatrixSymbol('B', n, n) In [3]: context = Q.symmetric(A) & Q.positive_definite(A) & Q.orthogonal(B) In [4]: ask(Q.symmetric(B*A*B.T) & Q.positive_definite(B*A*B.T), context) Out[4]: True Older answer that shows other work So after looking into this for a while this is what I've found. The current answer to my specific question is "No, there is no current system that can answer this question." There are however a few things that seem to come close. First, Matt Knepley and Lagerbaer both pointed to work by Diego Fabregat and Paolo Bientinesi. This work shows both the potential importance and the feasibility of this problem. It's a good read. Unfortunately I'm not certain exactly how his system works or what it is capable of (if anyone knows of other public material on this topic do let me know). Second, there is a tensor algebra library written for Mathematica called xAct which handles symmetries and such symbolically. It does some things very well but is not tailored to the special case of linear algebra. Third, these rules are written down formally in a couple of libraries for Coq, an automated theorem proving assistant (Google search for coq linear/matrix algebra to find a few). This is a powerful system which unfortunately seems to require human interaction. After talking with some theorem prover people they suggest looking into logic programming (i.e. Prolog, which Lagerbaer also suggested) for this sort of thing. To my knowledge this hasn't yet been done - I may play with it in the future. Update: I've implemented this using the Maude system. My code is hosted on github
https://api.stackexchange.com
Oxygen is a rather boring element. It has only two allotropes, dioxygen and ozone. Dioxygen has a double bond, and ozone has a delocalised cloud, giving rise to two "1.5 bonds". On the other hand, sulfur has many stable allotropes, and a bunch of unstable ones as well. The variety of allotropes is mainly due to the ability of sulfur to catenate. But sulfur does not have a stable diatomic allotrope at room temperature. I, personally, would expect disulfur to be more stable than dioxygen, due to the possibility of $\mathrm{p}\pi\text{-}\mathrm{d}\pi$ back-bonding. So, why do sulfur and oxygen have such opposite properties with respect to their ability to catenate?
First, a note: while oxygen has fewer allotropes than sulfur, it sure has more than two! These include $\ce{O}$, $\ce{O_2}$, $\ce{O_3}$, $\ce{O_4}$, $\ce{O_8}$, metallic $\ce{O}$ and four other solid phases. Many of these actually have a corresponding sulfur variant. However, you are right in the sense that sulfur has a greater tendency to catenate… let's try to see why! Here are the values of the single and double bond enthalpies: $$\begin{array}{cc} \hline \text{Bond} & \text{Dissociation energy / }\mathrm{kJ~mol^{-1}} \\ \hline \ce {O-O} & 142 \\ \ce {S–S} & 268 \\ \ce {O=O} & 499 \\ \ce {S=S} & 352 \\ \hline \end{array}$$ This means that $\ce{O=O}$ is stronger than $\ce{S=S}$, while $\ce{O–O}$ is weaker than $\ce{S–S}$. So, in sulfur, single bonds are favoured and catenation is easier than in oxygen compounds. It seems that the reason for the weaker $\ce{S=S}$ double bond has its roots in the size of the atom: it's harder for the two atoms to come close enough together, so the overlap of the $\mathrm{3p}$ orbitals is small and the $\pi$ bond is weak. This is attested by looking down the periodic table: $\ce{Se=Se}$ has an even weaker bond enthalpy of $\pu{272 kJ/mol}$. There is more in-depth discussion of the relative bond strengths in this question. While not particularly stable, it's actually also possible for oxygen to form discrete molecules with the general formula $\ce{H-O_n-H}$; water and hydrogen peroxide are the first two members of this class, but $n$ goes up to at least $5$. These "hydrogen polyoxides" are described further in this question.
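To put rough numbers on the catenation argument, one crude comparison is two single bonds versus one double bond. A small Python sketch using only the enthalpies from the table above (this two-for-one framing is a simplification, not a rigorous thermochemical cycle):

    # A rough comparison using the bond enthalpies from the table above (kJ/mol).
    single = {"O": 142, "S": 268}
    double = {"O": 499, "S": 352}
    for element in ("O", "S"):
        print(element, "two singles:", 2 * single[element], "vs one double:", double[element])
    # O: two singles = 284 < 499, so oxygen prefers the double-bonded O2 unit.
    # S: two singles = 536 > 352, so sulfur prefers chains and rings of single bonds.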
https://api.stackexchange.com
Related question: State of the Mac OS in Scientific Computing and HPC A significant number of software packages in computational science are written in Fortran, and Fortran isn't going away. A Fortran compiler is also required to build other software packages (one notable example being SciPy). However, Mac OS X does not include a Fortran compiler. How should I install a Fortran compiler on my machine?
Pick your poison. I recommend using Homebrew. I have tried all of these methods except for "Fink" and "Other Methods". Originally, I preferred MacPorts when I wrote this answer. In the two years since, Homebrew has grown a lot as a project and has proved more maintainable than MacPorts, which can require a lot of PATH hacking. Installing a version that matches system compilers If you want the version of gfortran to match the versions of gcc, g++, etc. installed on your machine, download the appropriate version of gfortran from here. The R developers and SciPy developers recommend this method. Advantages: Matches versions of compilers installed with XCode or with Kenneth Reitz's installer; unlikely to interfere with OS upgrades; coexists nicely with MacPorts (and probably Fink and Homebrew) because it installs to /usr/bin. Doesn't clobber existing compilers. Don't need to edit PATH. Disadvantages: Compiler stack will be really old. (GCC 4.2.1 is the latest Apple compiler; it was released in 2007.) Installs to /usr/bin. Installing a precompiled, up-to-date binary from HPC Mac OS X HPC Mac OS X has binaries for the latest release of GCC (at the time of this writing, 4.8.0 (experimental)), as well as g77 binaries, and an f2c-based compiler. The PETSc developers recommend this method on their FAQ. Advantages: With the right command, installs in /usr/local; up-to-date. Doesn't clobber existing system compilers, or the approach above. Won't interfere with OS upgrades. Disadvantages: Need to edit PATH. No easy way to switch between versions. (You could modify the PATH, delete the compiler install, or kludge around it.) Will clobber other methods of installing compilers in /usr/local because compiler binaries are simply named 'gcc', 'g++', etc. (without a version number, and without any symlinks). Use MacPorts MacPorts has a number of versions of compilers available for use. Advantages: Installs in /opt/local; port select can be used to switch among compiler versions (including system compilers). Won't interfere with OS upgrades. Disadvantages: Installing ports tends to require an entire "software ecosystem". Compilers don't include debugging symbols, which can pose a problem when using a debugger, or installing PETSc. (Sean Farley proposes some workarounds.) Also requires changing PATH. Could interfere with Homebrew and Fink installs. (See this post on SuperUser.) Use Homebrew Homebrew can also be used to install a Fortran compiler. Advantages: Easy to use package manager; installs the same Fortran compiler as in "Installing a version that matches system compilers". Only install what you need (in contrast to MacPorts). Could install a newer GCC (4.7.0) stack using the alternate repository homebrew-dupes. Disadvantages: Inherits all the disadvantages from "Installing a version that matches system compilers". May need to follow the Homebrew paradigm when installing other (non-Homebrew) software to /usr/local to avoid messing anything up. Could interfere with MacPorts and Fink installs. (See this post on SuperUser.) Need to change PATH. Installs could depend on system libraries, meaning that dependencies for Homebrew packages could break on an OS upgrade. (See this article.) I wouldn't expect there to be system library dependencies when installing gfortran, but there could be such dependencies when installing other Homebrew packages. Use Fink In theory, you can use Fink to install gfortran. I haven't used it, and I don't know anyone who has (and was willing to say something positive). 
Other methods Other binaries and links are listed on the GFortran wiki. Some of the links are already listed above. The remaining installation methods may or may not conflict with those described above; use at your own risk.
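Whichever route you choose, it's worth confirming which gfortran actually ends up first on your PATH. A minimal Python sketch using only the standard library (the paths in the comment are just typical examples; your system may differ):

    import shutil
    import subprocess

    path = shutil.which("gfortran")
    if path is None:
        print("no gfortran found on PATH")
    else:
        # e.g. /usr/local/bin/gfortran or /opt/local/bin/gfortran, depending on the install method
        print("active gfortran:", path)
        version = subprocess.run([path, "--version"], capture_output=True, text=True)
        print(version.stdout.splitlines()[0])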
https://api.stackexchange.com
In several different contexts we invoke the central limit theorem to justify whatever statistical method we want to adopt (e.g., approximate the binomial distribution by a normal distribution). I understand the technical details as to why the theorem is true but it just now occurred to me that I do not really understand the intuition behind the central limit theorem. So, what is the intuition behind the central limit theorem? Layman explanations would be ideal. If some technical detail is needed please assume that I understand the concepts of a pdf, cdf, random variable etc but have no knowledge of convergence concepts, characteristic functions or anything to do with measure theory.
I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own. Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things. Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket. In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.) Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles. The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$. 
Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things: No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large. The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.) Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram. The first generalization of the CLT adds, When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement). The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on. Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude). These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example, What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box). The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.) Why does the SD appear in such an essential way? Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this? 
I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.) This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version. At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that $$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$ $$\text{equals } \frac{n-k+1}{k}$$ $$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$ (That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product $$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$ $$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$ 135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation $$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$ we find that the log of the relative frequency is approximately $$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$ Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. 
That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.) Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above. Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery.
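If the algebra feels heavy, the box-of-tickets picture is easy to simulate. The sketch below (Python/NumPy; the box contents and sample sizes are arbitrary choices, not anything dictated by the CLT) draws many sums of n tickets, standardizes them with m_n = n × mean and s_n = √n × standard deviation, and compares the histogram to the limiting curve exp(−z²/2)/√(2π):

    import numpy as np

    rng = np.random.default_rng(0)
    box = np.array([0, 0, 1, 3, 7])     # an arbitrary, lopsided "box of tickets"
    n, trials = 250, 40_000

    sums = rng.choice(box, size=(trials, n)).sum(axis=1)   # y_n for many repetitions
    m_n = n * box.mean()                                   # centering sequence
    s_n = np.sqrt(n) * box.std()                           # scaling sequence
    z = (sums - m_n) / s_n

    hist, edges = np.histogram(z, bins=50, range=(-4, 4), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
    print("max |empirical - limiting curve| =", float(np.abs(hist - gauss).max()))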
https://api.stackexchange.com
How can I evaluate $$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$? I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method. In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$
No need to use Taylor series; this can be derived in a similar way to the formula for geometric series. Let's find a general formula for the following sum: $$S_{m}=\sum_{n=1}^{m}nr^{n}.$$ Notice that \begin{align*} S_{m}-rS_{m} & = -mr^{m+1}+\sum_{n=1}^{m}r^{n}\\ & = -mr^{m+1}+\frac{r-r^{m+1}}{1-r} \\ & =\frac{mr^{m+2}-(m+1)r^{m+1}+r}{1-r}. \end{align*} Hence $$S_m = \frac{mr^{m+2}-(m+1)r^{m+1}+r}{(1-r)^2}.$$ This equality holds for any $r \neq 1$, but in your case we have $r=\frac{1}{3}$ and a factor of $\frac{2}{3}$ in front of the sum. That is \begin{align*} \sum_{n=1}^{\infty}\frac{2n}{3^{n+1}} & = \frac{2}{3}\lim_{m\rightarrow\infty}\frac{m\left(\frac{1}{3}\right)^{m+2}-(m+1)\left(\frac{1}{3}\right)^{m+1}+\left(\frac{1}{3}\right)}{\left(1-\left(\frac{1}{3}\right)\right)^{2}} \\ & =\frac{2}{3}\frac{\left(\frac{1}{3}\right)}{\left(\frac{2}{3}\right)^{2}} \\ & =\frac{1}{2}. \end{align*} Added note: We can define $$S_m^k(r) = \sum_{n=1}^m n^k r^n.$$ Then the sum considered above is $S_m^1(r)$, and the geometric series is $S_m^0(r)$. We can evaluate $S_m^2(r)$ by using a similar trick, and considering $S_m^2(r) - rS_m^2(r)$. This will then equal a combination of $S_m^1(r)$ and $S_m^0(r)$, which we already have formulas for. This means that given a $k$, we could work out a formula for $S_m^k(r)$, but can we find $S_m^k(r)$ in general for any $k$? It turns out we can, and the formula is similar to the formula for $\sum_{n=1}^m n^k$, and involves the Bernoulli numbers.
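As a quick numerical sanity check of the closed form and the limit (a short Python sketch; nothing here goes beyond the algebra already shown):

    from fractions import Fraction

    def S(m, r):
        # closed form for sum_{n=1}^{m} n * r**n, valid for r != 1
        return (m * r**(m + 2) - (m + 1) * r**(m + 1) + r) / (1 - r)**2

    r = Fraction(1, 3)
    for m in (5, 10, 50):
        brute = sum(n * r**n for n in range(1, m + 1))
        assert brute == S(m, r)                       # the derivation, checked exactly
        print(m, float(Fraction(2, 3) * S(m, r)))     # partial sums of 2n/3^(n+1), approaching 0.5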
https://api.stackexchange.com
I read in this assembly programming tutorial that 8 bits are used for data while 1 bit is for parity, which is then used for detecting parity error (caused by hardware fault or electrical disturbance). Is this true?
A byte of data is eight bits; there may be more bits per byte of data used at the OS or even the hardware level for error checking (a parity bit, or even a more advanced error detection scheme), but the data is eight bits and any parity bit is usually invisible to the software. A byte has been standardized to mean 'eight bits of data'. The text isn't wrong in saying there may be more bits dedicated to storing a byte of data than the eight data bits themselves, but those aren't typically considered part of the byte per se; the text itself points to this fact. You can see this in the following section of the tutorial: "Doubleword: a 4-byte (32 bit) data item". 4*8=32; it might actually take up 36 bits on the system, but for your purposes it's only 32 bits.
https://api.stackexchange.com
I know this question has been asked previously, but I cannot find a satisfactory explanation as to why it is so difficult for $\ce{H4O^2+}$ to exist. There are explanations that it is so because of the $+2$ charge, but if that alone were the reason then the existence of species like $\ce{SO4^2-}$ should not have been possible. So, what exactly is the reason that makes $\ce{H4O^2+}$ so unstable?
I myself was always confused why $\ce{H3O^+}$ is so well-known and yet almost nobody talks of $\ce{H4O^2+}$. I mean, $\ce{H3O^+}$ still has a lone pair, right? Why can't another proton just latch onto that? Adding to the confusion, $\ce{H4O^2+}$ is very similar to $\ce{NH4+}$, which again is extremely well-known. Even further, the methanium cation $\ce{CH5+}$ exists (admittedly not something you'll find on a shelf), and that doesn't even have an available lone pair! It is very useful to rephrase the question "why is $\ce{H4O^2+}$ so rare?" into "why won't $\ce{H3O^+}$ accept another proton?". Now we can think of this in terms of an acid-base reaction: $$\ce{H3O^+ + H+ -> H4O^2+}$$ Yes, that's right. In this reaction $\ce{H3O^+}$ is the base, and $\ce{H^+}$ is the acid. Because solvents can strongly influence the acidity or basicity of dissolved compounds, and because inclusion of solvent makes calculations tremendously more complicated, we will restrict ourselves to the gas phase (hence $\ce{(g)}$ next to all the formulas). This means we will be talking about proton affinities. Before we get to business, though, let's start with something more familiar: $$\ce{H2O(g) + H+(g) -> H3O^+(g)}$$ Because this is in the gas phase, we can visualise the process very simply. We start with a lone water molecule in a perfect vacuum. Then, from a very large distance away, a lone proton begins its approach. We can calculate the potential energy of the whole system as a function of the distance between the oxygen atom and the distant proton. We get a graph that looks something like this: For convenience, we can set the potential energy of the system at 0 when the distance is infinite. At very large distances, the lone proton only very slightly tugs the electrons of the $\ce{H2O}$ molecule, but they attract and the system is slightly stabilised. The attraction gets stronger as the lone proton approaches. However, there is also a repulsive interaction, between the lone proton and the nuclei of the other atoms in the $\ce{H2O}$ molecule. At large distances, the attraction is stronger than the repulsion, but this flips around if the distance is too short. The happy medium is where the extra proton is close enough to dive into the molecule's electron cloud, but not close enough to experience severe repulsions with the other nuclei. In short, a lone proton from infinity is attracted to a water molecule, and the potential energy decreases up to a critical value, the bond length. The amount of energy lost is the proton affinity: in this scenario, a mole of water molecules reacting with a mole of protons would release approximately $\mathrm{697\ kJ\ mol^{-1}}$ (values from this table). This reaction is highly exothermic. Alright, now for the next step: $$\ce{H3O^+(g) + H+(g) -> H4O^2+(g)}$$ This should be similar, right? Actually, no. There is a very important difference between this reaction and the previous one; the reagents now both have a net positive charge. This means there is now a strong additional repulsive force between the two. In fact, the graph above changes completely. Starting from zero potential at infinity, instead of a slow decrease in potential energy, the lone proton has to climb uphill, fighting a net electrostatic repulsion. However, even more interestingly, if the proton does manage to get close enough, the electron cloud can abruptly envelop the additional proton and create a net attraction.
The resulting graph now looks more like this: Very interestingly, the bottom of the "pocket" on the left of the graph (the potential well) can have a higher potential energy than if the lone proton was infinitely far away. This means the reaction is endothermic, but with enough effort, an extra proton can be pushed into the molecule, and it gets trapped in the pocket. Indeed, according to Olah et al., J. Am. Chem. Soc. 1986, 108 (5), pp 1032-1035, the formation of $\ce{H4O^2+}$ in the gas phase was calculated to be endothermic by $\mathrm{248\ kJ\ mol^{-1}}$ (that is, the proton affinity of $\ce{H3O^+}$ is $\mathrm{-248\ kJ\ mol^{-1}}$), but once formed, it has a barrier towards decomposition (the activation energy towards release of a proton) of $\mathrm{184\ kJ\ mol^{-1}}$ (the potential well has a maximum depth of $\mathrm{184\ kJ\ mol^{-1}}$). Due to the fact that $\ce{H4O^2+}$ was calculated to form a potential well, it can in principle exist. However, since it is the product of a highly endothermic reaction, unsurprisingly it is very hard to find. The reality in solution phase is more complicated, but its existence has been physically verified (if indirectly). But why stop here? What about $\ce{H5O^3+}$? $$\ce{H4O^2+(g) + H+(g) -> H5O^3+(g)}$$ I've run a rough calculation myself using computational chemistry software, and here it seems we really do reach a wall. It appears that $\ce{H5O^3+}$ is an unbound system, which is to say that its potential energy curve has no pocket like the ones above. $\ce{H5O^3+}$ could only ever be made transiently, and it would immediately spit out at least one proton. The reason here really is the massive amount of electrical repulsion, combined with the fact that the electron cloud can't reach out to the distance necessary to accommodate another atom. You can make your own potential energy graphs here. Note how depending on the combination of parameters, the potential well can lie at negative potential energies (an exothermic reaction) or positive potential energies (an endothermic reaction). Alternatively, the pocket may not exist at all - these are the unbound systems. EDIT: I've done some calculations of proton affinities/stabilities on several other simple molecules, for comparison. I do not claim the results to be quantitatively correct. $$ \begin{array}{lllll} \text{Species} & \ce{CH4} & \ce{CH5+} & \ce{CH6^2+} & \ce{CH7^3+} & \ce{CH8^4+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 556 & -246 & -1020 & N/A & N/A \\ \end{array} $$ Notes: Even without a lone pair, methane ($\ce{CH4}$) protonates very exothermically in the gas phase. This is a testament to the enormous reactivity of a bare proton, and the huge difference it makes to not have push a proton into an already positively-charged ion. For most of the seemingly hypercoordinate species in these tables (more than four bonds), the excess hydrogen atoms "pair up" such that it can be viewed as a $\ce{H2}$ molecule binding sideways to the central atom. See the methanium link at the start. $$ \begin{array}{lllll} \text{Species} & \ce{NH3} & \ce{NH4+} & \ce{NH5^2+} & \ce{NH6^3+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 896 & -410 & N/A & N/A \\ \end{array} $$ Notes: Even though the first protonation is easier relative to $\ce{CH4}$, the second one is harder. 
This is likely because increasing the electronegativity of the central atom makes the electron cloud "stiffer", and less accommodating to all those extra protons. The $\ce{NH5^{2+}}$ ion, unlike other ions listed here with more than four hydrogens, appears to be a true hypercoordinate species. Del Bene et al. indicate a five-coordinate square pyramidal structure with delocalized nitrogen-hydrogen bonds. $$ \begin{array}{lllll} \text{Species} & \ce{H2O} & \ce{H3O+} & \ce{H4O^2+} & \ce{H5O^3+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 722 & -236 & N/A & N/A \\ \end{array} $$ Notes: The first series which does not accommodate proton hypercoordination. $\ce{H3O+}$ is easier to protonate than $\ce{NH4+}$, even though oxygen is more electronegative. This is because the $\ce{H4O^2+}$ nicely accommodates all protons, while one of the protons in $\ce{NH5^2+}$ has to fight for its space. $$ \begin{array}{lllll} \text{Species} & \ce{HF} & \ce{H2F+} & \ce{H3F^2+} & \ce{H4F^3+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 501 & -459 & N/A & N/A \\ \end{array} $$ Notes: Even though $\ce{H3F^2+}$ still formally has a lone pair, its electron cloud is now so stiff that it cannot reach out to another proton even at normal bonding distance. $$ \begin{array}{lllll} \text{Species} & \ce{Ne} & \ce{NeH+} & \ce{NeH2^2+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 204 & N/A & N/A \\ \end{array} $$ Notes: $\ce{Ne}$ is a notoriously unreactive noble gas, but it too will react exothermically with a bare proton in the gas phase. Depending on the definition of electronegativity used, it is possible to determine an electronegativity for $\ce{Ne}$, which turns out to be even higher than $\ce{F}$. Accordingly, its electron cloud is even stiffer. $$ \begin{array}{lllll} \text{Species} & \ce{H2S} & \ce{H3S+} & \ce{H4S^2+} & \ce{H5S^3+} & \ce{H6S^4+} \\ \text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\ \text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 752 & -121 & -1080 & N/A & N/A \\ \end{array} $$ Notes: The lower electronegativity and larger size of $\ce{S}$ means its electrons can reach out further and accommodate protons at a larger distance, while reducing repulsions between the nuclei. Thus, in the gas phase, $\ce{H2S}$ is a stronger base than $\ce{H2O}$. The situation is inverted in aqueous solution due to uniquely strong intermolecular interactions (hydrogen bonding) which are much more important for $\ce{H2O}$. $\ce{H3S+}$ also has an endothermic proton affinity, but it is lower than for $\ce{H3O+}$, and therefore $\ce{H4S^2+}$ is easier to make. Accordingly, $\ce{H4S^2+}$ has been detected in milder (though still superacidic!) conditions than $\ce{H4O^2+}$. The larger size and lower electronegativity of $\ce{S}$ once again are shown to be important; the hypercoodinate $\ce{H5S^3+}$ appears to exist, while the oxygen analogue doesn't.
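Returning to the potential energy curves described earlier: their qualitative shape is easy to reproduce with a toy model. The numbers below are made up (they are not the computed integrals for any of the species above); the only point is that a long-range Coulomb repulsion plus a short-range attraction can give a local minimum that lies above the energy at infinite separation, i.e. an endothermic product that is nonetheless trapped behind a barrier. A Python sketch:

    import numpy as np

    r = np.linspace(0.6, 6.0, 2000)                       # separation, arbitrary units
    repulsion = 2.0 / r                                   # Coulomb-like repulsion of two cations
    attraction = -1.6 * np.exp(-((r - 1.0) ** 2) / 0.1)   # short-range "electron cloud" well (toy)
    V = repulsion + attraction                            # V -> 0 at infinite separation

    pocket = V[r < 2.0].min()                 # bottom of the well near the bonding distance
    barrier = V[(r > 1.5) & (r < 4.0)].max()  # hump the trapped proton must climb to escape
    print(f"pocket bottom: {pocket:.2f}, barrier top: {barrier:.2f}, infinite separation: 0.00")
    # With these made-up numbers the pocket bottom sits above zero (formation is endothermic),
    # but below the barrier, so the extra proton is trapped once it is pushed in.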
https://api.stackexchange.com
I have VCF files (SNPs & indels) for WGS on 100 samples, but I want to only use a specific subset of 10 of the samples. Is there a relatively easy way to pull out only the 10 samples, while still keeping all of the data for the entire genome? I have a script that allows me to pull out regions of the whole genome for all 100 samples, so if I could do something similar but only output those regions for the 10 samples that I want, that would be ideal.
Bcftools has sample/individual filtering as an option for most of the commands. You can subset individuals by using the -s or -S option: -s, --samples [^]LIST Comma-separated list of samples to include or exclude if prefixed with "^". Note that in general tags such as INFO/AC, INFO/AN, etc are not updated to correspond to the subset samples. bcftools view is the exception where some tags will be updated (unless the -I, --no-update option is used; see bcftools view documentation). To use updated tags for the subset in another command one can pipe from view into that command. For example: -S, --samples-file FILE File of sample names to include or exclude if prefixed with "^". One sample per line. See also the note above for the -s, --samples option. The command bcftools call accepts an optional second column indicating ploidy (0, 1 or 2) or sex (as defined by --ploidy, for example "F" or "M"), and can parse also PED files. If the second column is not present, the sex "F" is assumed. With bcftools call -C trio, PED file is expected. File formats examples: sample1 1 sample2 2 sample3 2 or sample1 M sample2 F sample3 F or a .ped file (here is shown a minimum working example, the first column is ignored and the last indicates sex: 1=male, 2=female): ignored daughterA fatherA motherA 2 ignored sonB fatherB motherB 1 Example usage: bcftools view -s sample1,sample2 file.vcf > filtered.vcf bcftools view -S sample_file.txt file.vcf > filtered.vcf See the bcftools manpage for more information.
https://api.stackexchange.com
What is the advantage gained by the substitution of thymine for uracil in DNA? I have read previously that it is due to thymine being "better protected" and therefore more suited to the storage role of DNA, which seems fine in theory, but why does the addition of a simple methyl group make the base more well protected?
One major problem with using uracil as a base is that cytosine can be deaminated, which converts it into uracil. This is not a rare reaction; it happens around 100 times per cell, per day. This is no major problem when using thymine, as the cell can easily recognize that the uracil doesn't belong there and can repair it by substituting it by a cytosine again. There is an enzyme, uracil DNA glycosylase, that does exactly that; it excises uracil bases from double-stranded DNA. It can safely do that as uracil is not supposed to be present in the DNA and has to be the result of a base modification. Now, if we would use uracil in DNA it would not be so easy to decide how to repair that error. It would prevent the usage of this important repair pathway. The inability to repair such damage doesn't matter for RNA as the mRNA is comparatively short-lived and any potential errors don't lead to any lasting damage. It matters a lot for DNA as the errors are continued through every replication. Now, this explains why there is an advantage to using thymine in DNA, it doesn't explain why RNA uses uracil. I'd guess it just evolved that way and there was no significant drawback that could be selected against, but there might be a better reason (more difficult biosynthesis of thymine, maybe?). You'll find a bit more information on that in "Molecular Biology of the Cell" from Bruce Alberts et al. in the chapter about DNA repair (from page 267 on in the 4th edition).
https://api.stackexchange.com
In molecular orbital theory, the fact that a bonding and antibonding molecular orbital pair have different energies is accompanied by the fact that the energy by which the bonding is lowered is less than the energy by which antibonding is raised, i.e. the stabilizing energy of each bonding interaction is less than the destabilising energy of antibonding. How is that possible if their sum has to equal the energies of the combining atomic orbitals and conservation of energy has to hold true? "Antibonding is more antibonding than bonding is bonding." For example, the fact that $\ce{He2}$ molecule is not formed can be explained from its MO diagram, which shows that the number of electrons in antibonding and bonding molecular orbitals is the same, and since the destabilizing energy of the antibonding MO is greater than the stabilising energy of bonding MO, the molecule is not formed. This is the common line of reasoning you find at most places.
Mathematical Explanation When examining the linear combination of atomic orbitals (LCAO) for the $\ce{H2+}$ molecular ion, we get two different energy levels, $E_+$ and $E_-$ depending on the coefficients of the atomic orbitals. The energies of the two different MO's are: $$\begin{align} E_+ &= E_\text{1s} + \frac{j_0}{R} - \frac{j' + k'}{1+S} \\ E_- &= E_\text{1s} + \frac{j_0}{R} - \frac{j' - k'}{1-S} \end{align} $$ Note that $j_0 = \frac{e^2}{4\pi\varepsilon_0}$, $R$ is the internuclear distance, $S=\int \chi_\text{A}^* \chi_\text{B}\,\text{d}V$ the overlap integral, $j'$ is a coulombic contribution to the energy and $k'$ is a contribution to the resonance integral, and it does not have a classical analogue. $j'$ and $k'$ are both positive and $j' \gt k'$. You'll note that $j'-k' > 0$. This is why the energy levels of $E_+$ and $E_-$ are not symmetrical with respect to the energy level of $E_\text{1s}$. Intuitive Explanation The intuitive explanation goes along the following line: Imagine two hydrogen nuclei that slowly get closer to each other, and at some point start mixing their orbitals. Now, one very important interaction is the coulomb force between those two nuclei, which gets larger the closer the nuclei come together. As a consequence of this, the energies of the molecular orbitals get shifted upwards, which is what creates the asymmetric image that we have for these energy levels. Basically, you have two positively charged nuclei getting closer to each other. Now you have two options: Stick some electrons between them. Don't stick some electrons between them. If you follow through with option 1, you'll diminish the coulomb forces between the two nuclei somewhat in favor of electron-nucleus attraction. If you go with method 2 (remember that the $\sigma^*_\text{1s}$ MO has a node between the two nuclei), the nuclei feel each other's repulsive forces more strongly. Further Information I highly recommend the following book, from which most of the information above stems: Peter Atkins and Ronald Friedman, In Molecular Quantum Mechanics; $5^\text{th}$ ed., Oxford University Press: Oxford, United Kingdom, 2011 (ISBN-13: 978-0199541423).
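To see the asymmetry numerically, one can plug illustrative numbers into the two expressions above. The values in this Python sketch are arbitrary, chosen only to respect $j' > k' > 0$ and $0 < S < 1$; they are not the actual $\ce{H2+}$ integrals, so treat it as a sketch of the algebra rather than a calculation:

    # Energies relative to E_1s, in arbitrary units; values chosen only so that j' > k' > 0 and 0 < S < 1.
    nuclear_repulsion = 1.0   # j0 / R
    j_prime = 0.9             # Coulomb contribution j'
    k_prime = 0.7             # resonance contribution k'
    S = 0.4                   # overlap integral

    E_plus = nuclear_repulsion - (j_prime + k_prime) / (1 + S)    # bonding MO, relative to E_1s
    E_minus = nuclear_repulsion - (j_prime - k_prime) / (1 - S)   # antibonding MO, relative to E_1s
    print(f"bonding shift:     {E_plus:+.3f}")    # about -0.14: lowered a little
    print(f"antibonding shift: {E_minus:+.3f}")   # about +0.67: raised a lot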
https://api.stackexchange.com
Thinking about it: you would never find a "grounded" multimeter as robust and useful, because a path to ground through the multimeter would modify the circuit's behaviour and possibly damage the multimeter with unintended currents. Why are so many oscilloscopes earth referenced? Upon reading some educational material, a majority of the "common mistakes made by students" are placing the grounding clip incorrectly and causing poor results - when the o-scope is just being used as a fancy voltmeter! I've heard of a Tek scope having an isolation transformer within; however, ignoring that, and taking into account that newer DSOs may have plastic cases (isolated from you, most importantly, I would assume), could I just remove the earthing pin, install a 1:1 AC transformer in between the o-scope and the outlet, and be on my merry way probing various hot/neutral/earthed sources with no worries about a path to ground through the scope any longer?
Oscilloscopes usually require significant power and are physically big. Letting a chassis that size float - including the exposed grounds on the BNC connectors and the probe ground clips - would be dangerous. If you have to look at waveforms in wall-powered equipment, it is generally much better to put the isolation transformer on that equipment instead of on the scope. Once the scope is connected, it provides a ground reference to that part of the circuit, so other parts could then be at high ground-referenced voltages, which could be dangerous. However, you'll likely be more careful not to touch parts of the unit under test than the scope. Scopes can also have other paths to ground that are easy to forget. For example, the scope on my bench usually has a permanent RS-232 connection to my computer. It would be easy to float the scope but forget about such things; the scope would actually not be floating. At best a fuse would pop when it is first connected to a wall-powered unit under test in the wrong place. Manufacturers could isolate the scope easily enough, but that probably opens them to liability problems. In general, bench equipment is not isolated but hand-held equipment is. If you really need to make isolated measurements often, you can get battery operated handheld scopes.
https://api.stackexchange.com
Which is the fastest library for performing delaunay triangulation of sets with millions if 3D points? Are there also GPU versions available? From the other side, having the voronoi tessellation of the same set of points, would help (in terms of performance) for getting the delaunay triangulation?
For computing three-dimensional Delaunay triangulations (tetrahedralizations, really), TetGen is a commonly used library. For your convenience, here's a little benchmark on how long it takes to compute the tetrahedralization of a number of random points from the unit cube. For 100,000 points it takes 4.5 seconds on an old Pentium M. (This was done with Mathematica's TetGen interface. I don't know how much overhead it introduces.) Regarding your other question: if you already have the Voronoi tessellation, then getting the Delaunay triangulation is a relatively simple transformation, since the two are dual to each other.
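If you just want a rough feel for timings on your own machine and don't need TetGen specifically, SciPy's Qhull-based Delaunay also handles 3D point sets directly (a different code, generally slower than TetGen, so the numbers are not comparable to the benchmark above). A minimal Python sketch:

    import time
    import numpy as np
    from scipy.spatial import Delaunay

    points = np.random.default_rng(0).random((100_000, 3))   # 100k random points in the unit cube

    start = time.perf_counter()
    tri = Delaunay(points)                                    # 3D Delaunay (tetrahedralization) via Qhull
    print(f"{len(tri.simplices)} tetrahedra in {time.perf_counter() - start:.1f} s")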
https://api.stackexchange.com
If you calculate the area of a rectangle, you just multiply the height and the width and get back the unit squared. Example: 5cm * 10cm = 50cm² In contrast, if you calculate the size of an image, you also multiply the height and the width, but you get back the same unit - pixels - that the height and width had before multiplying. Example: What you actually calculate is the following: 3840 Pixel * 2160 Pixel = 8294400 Pixel What I would expect is: 3840 Pixel * 2160 Pixel = 8294400 Pixel² Why is it that the unit is not squared when multiplying pixels?
Because "pixel" isn't a unit of measurement: it's an object. So, just like a wall that's 30 bricks wide by 10 bricks tall contains 300 bricks (not bricks-squared), an image that's 30 pixels wide by 10 pixels tall contains 300 pixels (not pixels-squared).
https://api.stackexchange.com
I want to understand the difference between pipeline systems and workflow engines. After reading A Review of Scalable Bioinformatics Pipelines I had a good overview of current bioinformatics pipelines. After some further research I found that there is a collection of highly capable workflow engines. My question is then based on what I saw for argo: I would say it can be used as a bioinformatics pipeline as well. So how do bioinformatics pipelines differ from workflow engines?
Great question! Note that from a prescriptive standpoint, the terms pipeline and workflow don't have any strict or precise definitions. But it's still useful to take a descriptive standpoint and discuss how the terms are commonly used in the bioinformatics community. But before talking about pipelines and workflows, it's helpful to talk about programs and scripts. A program or script typically implements a single data analysis task (or set of related tasks). Some examples include the following. FastQC, a program that checks NGS reads for common quality issues Trimmomatic, a program for cleaning NGS reads salmon, a program for estimating transcript abundance from NGS reads a custom R script that uses DESeq2 to perform differential expression analysis A pipeline or a workflow refers to a particular kind of program or script that is intended primarily to combine other independent programs or scripts. For example, I might want to write an RNA-seq workflow that executes Trimmomatic, FastQC, salmon, and the R script using a single command. This is particularly useful if I have to run the same command many times, or if the commands take a long time to run. It's very inconvenient when you have to babysit your computer and wait for step 3 to finish so that you can launch step 4! So when does a program become a pipeline? Honestly, there are no strict rules. In some cases it's clear: the 10-line Python script I wrote to split Fasta files is definitely NOT a pipeline, but the 200-line Python script I wrote that does nothing but invoke 6 other bioinformatics programs definitely IS a pipeline. There are a lot of tools that fall in the middle: they may require running multiple steps in a certain order, or implement their own processing but also delegate processing to other tools. Usually nobody worries too much about whether it's "correct" to call a particular tool a pipeline. Finally, a workflow engine is the software used to actually execute your pipeline/workflow. As mentioned above, general-purpose scripting languages like Bash, Python, or Perl can be used to implement workflows. But there are other languages that are designed specifically for managing workflows. Perhaps the earliest and most popular of these is GNU Make, which was originally intended to help engineers coordinate software compilation but can be used for just about any workflow. More recently there has been a proliferation of tools intended to replace GNU Make for numerous languages in a variety of contexts. The most popular in bioinformatics seems to be Snakemake, which provides a nice balance of simplicity (through shell commands), flexibility (through configuration), and power-user support (through Python scripting). Build scripts written for these tools (i.e., a Makefile or Snakefile) are often called pipelines or workflows, and the workflow engine is the software that executes the workflow. The workflow engines you listed above (such as argo) can certainly be used to coordinate bioinformatics workflows. Honestly though, these are aimed more at the broader tech industry: they involve not just workflow execution but also hardware and infrastructure coordination, and would require a level of engineering expertise/support not commonly available in a bioinformatics setting. This could change, however, as bioinformatics becomes more of a "big data" endeavor. As a final note, I'll mention a few more relevant technologies that I wasn't able to fit above. 
Docker: managing a consistent software environment across multiple (potentially dozens or hundreds) of computers; Singularity is Docker's less popular step-sister Common Workflow Language (CWL): a generic language for declaring how each step of a workflow is executed, what inputs it needs, what outputs it creates, and approximately what resources (RAM, storage, CPU threads, etc.) are required to run it; designed to write workflows that can be run on a variety of workflow engines Dockstore: a registry of bioinformatics workflows (heavy emphasis on genomics) that includes a Docker container and a CWL specification for each workflow toil: a production-grade workflow engine used primarily for bioinformatics workflows
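To make the "script that does nothing but invoke other programs" idea concrete, here is a deliberately minimal Python pipeline sketch. The tool names, flags and file names are placeholders rather than real command lines; the point is only the structure: ordered steps, stopping on failure, and skipping steps whose outputs already exist (which is a large part of what engines like Snakemake automate for you):

    import subprocess
    from pathlib import Path

    # Each step: (name, placeholder command, output file that proves the step already ran).
    steps = [
        ("qc",    ["step1_tool", "--in", "reads.fq",   "--out", "qc_report.txt"], "qc_report.txt"),
        ("trim",  ["step2_tool", "--in", "reads.fq",   "--out", "trimmed.fq"],    "trimmed.fq"),
        ("quant", ["step3_tool", "--in", "trimmed.fq", "--out", "counts.tsv"],    "counts.tsv"),
    ]

    for name, cmd, output in steps:
        if Path(output).exists():
            print(f"[{name}] output already present, skipping")
            continue
        print(f"[{name}] running:", " ".join(cmd))
        subprocess.run(cmd, check=True)   # abort the whole pipeline if any step fails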
https://api.stackexchange.com
I asked a relatively simple question. Unfortunately, the answers provoke far more questions! :-( It seems that I don't actually understand RC circuits at all. In particular, why there's an R in there. It seems completely unnecessary. Surely the capacitor is doing all the work? What the heck do you need a resistor for? Clearly my mental model of how this stuff works is incorrect somehow. So let me try to explain my mental model: If you try to pass a direct current through a capacitor, you are just charging the two plates. Current will continue to flow until the capacitor is fully charged, at which point no further current can flow. At this point, the two ends of the wire might as well not even be connected. Until, that is, you reverse the direction of the current. Now current can flow while the capacitor discharges, and continues to flow while the capacitor recharges in the opposite polarity. But after that, once again the capacitor becomes fully charged, and no further current can flow. It seems to me that if you pass an alternating current through a capacitor, one of two things will happen. If the wave period is longer than the time to fully charge the capacitor, the capacitor will spend most of the time fully charged, and hence most of the current will be blocked. But if the wave period is shorter, the capacitor will never reach a fully-charged state, and most of the current will get through. By this logic, a single capacitor on its own is a perfectly good high-pass filter. So... why does everybody insist that you have to have a resistor as well to make a functioning filter? What am I missing? Consider, for example, this circuit from Wikipedia: What the hell is that resistor doing there? Surely all that does is short-circuit all the power, such that no current reaches the other side at all. Next consider this: This is a little strange. A capacitor in parallel? Well... I suppose if you believe that a capacitor blocks DC and passes AC, that would mean that at high frequencies, the capacitor shorts-out the circuit, preventing any power getting through, while at low frequencies the capacitor behaves as if it's not there. So this would be a low-pass filter. Still doesn't explain the random resistor through, uselessly blocking nearly all the power on that rail... Obviously the people who actually design this stuff know something that I don't! Can anyone enlighten me? I tried the Wikipedia article on RC circuits, but it just talks about a bunch of Laplace transform stuff. It's neat that you can do that, I'm trying to understand the underlying physics. And failing! (Similar arguments to the above suggest that an inductor by itself ought to make a good low-pass filter — but again, all the literature seems to disagree with me. I don't know whether that's worthy of a separate question or not.)
Let's try this Wittgenstein's ladder style. First let's consider this: simulate this circuit – Schematic created using CircuitLab We can calculate the current through R1 with Ohm's law: $$ {1\:\mathrm V \over 100\:\Omega} = 10\:\mathrm{mA} $$ We also know that the voltage across R1 is 1V. If we use ground as our reference, then how does 1V at the top of the resistor become 0V at the bottom of the resistor? If we could stick a probe somewhere in the middle of R1, we should measure a voltage somewhere between 1V and 0V, right? A resistor with a probe we can move around on it...sounds like a potentiometer, right? simulate this circuit By adjusting the knob on the potentiometer, we can measure any voltage between 0V and 1V. Now what if instead of a pot, we use two discrete resistors? simulate this circuit This is essentially the same thing, except we can't move the wiper on the potentiometer: it's stuck at a position 3/4th from the top. If we get 1V at the top, and 0V at the bottom, then 3/4ths of the way up we should expect to see 3/4ths of the voltage, or 0.75V. What we have made is a resistive voltage divider. Its behavior is formally described by the equation: $$ V_\text{out} = {R_2 \over R_1 + R_2} \cdot V_\text{in} $$ Now, what if we had a resistor with a resistance that changed with frequency? We could do some neat stuff. That's what capacitors are. At a low frequency (the lowest frequency being DC), a capacitor looks like a large resistor (infinite at DC). At higher frequencies, the capacitor looks like a smaller resistor. At infinite frequency, a capacitor has no resistance at all: it looks like a wire. So: simulate this circuit For high frequencies (top right), the capacitor looks like a small resistor. R3 is very much smaller than R2, so we will measure a very small voltage here. We could say that the input has been attenuated a lot. For low frequencies (lower right), the capacitor looks like a large resistor. R5 is very much bigger than R4, so here we will measure a very large voltage, almost all of the input voltage, that is, the input voltage has been attenuated very little. So high frequencies are attenuated, and low frequencies are not. Sounds like a low-pass filter. And if we exchange the places of the capacitor and the resistor, the effect is reversed, and we have a high-pass filter. However, capacitors aren't really resistors. What they are though, are impedances. The impedance of a capacitor is: $$ Z_\text{capacitor} = -j{1 \over 2 \pi f C} $$ Where: \$C\$ is the capacitance, in farads \$f\$ is the frequency, in hertz \$j\$ is the imaginary unit, \$\sqrt{-1}\$ Notice that, because \$f\$ is in the denominator, the impedance decreases as frequency increases. Impedances are complex numbers, because they contain \$j\$. If you know how arithmetic operations work on complex numbers, then you can still use the voltage divider equation, except we will use \$Z\$ instead of \$R\$ to suggest we are using impedances instead of simple resistances: $$ V_\text{out} = V_{in}{Z_2 \over Z_1 + Z_2}$$ And from this, you can calculate the behavior of any RC circuit, and a good deal more.
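To put numbers on that last equation, here is a small Python sketch that treats the capacitor as the frequency-dependent impedance above and plugs it into the voltage-divider formula; the R and C values are arbitrary examples, not taken from the schematics:
import math

R = 1_000.0   # ohms (example value)
C = 100e-9    # farads (example value)

def lowpass_gain(f):
    """|Vout/Vin| for a series R driving a shunt C (RC low-pass)."""
    Zc = -1j / (2 * math.pi * f * C)   # impedance of the capacitor
    return abs(Zc / (R + Zc))

for f in (10, 100, 1_000, 10_000, 100_000):   # hertz
    print(f"{f:>7} Hz: gain = {lowpass_gain(f):.3f}")
The gain stays close to 1 at low frequencies and falls off above roughly 1/(2*pi*R*C), about 1.6 kHz for these example values, which is exactly the low-pass behaviour described above.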
https://api.stackexchange.com
I know of, and have used, f2py2e to wrap some old Fortran 77 code, but my understanding is that it does not work with newer Fortran 95 code. I've researched what I should use, and have come across fwrap and G3 f2py, neither of which seem to give any explanation of their current state, or how to use them (beyond basic usage). I have also seen that the version of f2py has the option to use the third generation f2py, but it is commented as being non-functional. Given this, I don't know which project I should use for a uni project. Which one should I use for new code? P.S. This is basically the same question as one asked elsewhere; it was suggested that asking here might give better answers.
You can use the Python builtin ctypes module as described on fortran90.org. It is pretty straightforward and doesn't require any external dependencies. Also, the ndpointer arg type helper is very handy.
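As a minimal sketch of what that can look like (the Fortran routine, its bind(c) name and the library name here are hypothetical placeholders; adapt them to your own code):
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer, load_library

# Assumes a Fortran routine compiled into libexample.so, e.g.
#   subroutine scale_array(x, n, factor) bind(c, name="scale_array")
lib = load_library("libexample", ".")

lib.scale_array.restype = None
lib.scale_array.argtypes = [
    ndpointer(dtype=np.float64, ndim=1, flags="F_CONTIGUOUS"),  # x(n)
    ctypes.POINTER(ctypes.c_int),      # n, passed by reference
    ctypes.POINTER(ctypes.c_double),   # factor, passed by reference
]

x = np.arange(5, dtype=np.float64)
lib.scale_array(x, ctypes.byref(ctypes.c_int(x.size)), ctypes.byref(ctypes.c_double(2.0)))
print(x)   # modified in place by the Fortran routine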
https://api.stackexchange.com
A common question, here and elsewhere. Is C++ suitable for embedded systems? Microcontrollers? RTOSes? Toasters? Embedded PCs? Is OOP useful on microcontrollers? Does C++ remove the programmer too far from the hardware to be efficient? Should Arduino's C++ (with no dynamic memory management, templates, exceptions) be considered as "real C++"? (Hopefully, this wiki will serve as a place to contain this potential holy war)
Yes, C++ is still useful in embedded systems. As everyone else has said, it still depends on the system itself, like an 8-bit uC would probably be a no-no in my book even though there is a compiler out there and some people do it (shudder). There's still an advantage to using C++ even when you scale it down to something like "C+" even in an 8-bit micro world. What do I mean by "C+"? I mean don't use new/delete, avoid exceptions, avoid virtual classes with inheritance, possibly avoid inheritance altogether, be very careful with templates, use inline functions instead of macros, and use const variables instead of #defines. I've been working both in C and C++ in embedded systems for well over a decade now, and some of my youthful enthusiasm for C++ has definitely worn off due to some real world problems that shake one's naivete. I have seen the worst of C++ in embedded systems, which I would like to refer to as "CS programmers gone wild in an EE world." In fact, that is something I'm working on with my client to improve this one codebase they have among others. The danger of C++ is that it's a very, very powerful tool, much like a two-edged sword that can cut both your arm and leg off if you are not educated and disciplined properly in its language and in programming in general. C is more like a single-edged sword, but still just as sharp. With C++ it's too easy to get to very high levels of abstraction and create obfuscated interfaces that become meaningless in the long-term, and that's partly due to C++'s flexibility in solving the same problem with many different language features (templates, OOP, procedural, RTTI, OOP+templates, overloading, inlining). I finished two 4-hour seminars on Embedded Software in C++ by the C++ guru, Scott Meyers. He pointed out some things about templates that I never considered before and how much more they can help in creating safety-critical code. The gist of it is, you can't have dead code in software that has to meet stringent safety-critical code requirements. Templates can help you accomplish this, since the compiler only creates the code it needs when instantiating templates. However, one must become more thoroughly educated in their use to design correctly for this feature, which is harder to accomplish in C because linkers don't always optimize dead code. He also demonstrated a feature of templates that could only be accomplished in C++ and would have kept the Mars Climate Orbiter from crashing had NASA implemented a similar system to protect units of measurement in the calculations. Scott Meyers is a very big proponent of templates and judicious use of inlining, and I must say I'm still skeptical about being gung ho about templates. I tend to shy away from them, even though he says they should only be applied where they become the best tool. He also makes the point that C++ gives you the tools to make really good interfaces that are easy to use right and hard to use wrong. Again, that's the hard part. One must come to a level of mastery in C++ before you can know how to apply these features in the most efficient way to be the best design solution. The same goes for OOP. In the embedded world, you must familiarize yourself with what kind of code the compiler is going to spit out to know if you can handle the run-time costs of run-time polymorphism. You need to be willing to make measurements as well to prove your design is going to meet your deadline requirements. Is that new InterruptManager class going to make my interrupt latency too long?
There are other forms of polymorphism that may fit your problem better, such as link-time polymorphism, which C can do as well and which C++ can do through the Pimpl design pattern (Opaque pointer). I say all that to say that C++ has its place in the embedded world. You can hate it all you want, but it's not going away. It can be written in a very efficient manner, but it's harder to learn how to do it correctly than with C. It can sometimes work better than C at solving a problem and sometimes at expressing a better interface, but again, you've got to educate yourself and not be afraid to learn how.
https://api.stackexchange.com
Fun with Math time. My mom gave me a roll of toilet paper to put in the bathroom, and looking at it I immediately wondered about this: is it possible, through very simple math, to calculate (with small error) the total paper length of a toilet roll? Writing down some math, I came to this study, which I share with you because there are some questions I have in mind, and because as someone rightly said: for every problem there always are at least 3 solutions. I started by outlining the problem in a geometrical way, namely looking only at the essential: the roll from above, identifying the salient parameters: Parameters $r = $ radius of internal circle, namely the paper tube circle; $R = $ radius of the whole paper roll; $b = R - r = $ "partial" radius, namely the difference of two radii as stated. First Point I treated the whole problem in the discrete way. [See the end of this question for more details about what this means] Calculation In a discrete way, the problem asks for the total length of the rolled paper, so the easiest way is to treat the problem by thinking about the length as the sum of the whole circumferences starting with radius $r$ and ending with radius $R$. But how many circumferences are there? Here is one of the main points, and then I thought about introducing a new essential parameter, namely the thickness of a single sheet. Notice that it's important to work with measurable quantities. Calling $h$ the thickness of a single sheet, and knowing $b$ we can give an estimate of how many sheets $N$ are rolled: $$N = \frac{R - r}{h} = \frac{b}{h}$$ Having to compute a sum, the total length $L$ is then: $$L = 2\pi r + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi R$$ or better: $$L = 2\pi (r + 0h) + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi (r + Nh)$$ In which obviously $2\pi (r + 0h) = 2\pi r$ and $2\pi(r + Nh) = 2\pi R$. Writing it as a sum (and calculating it) we get: $$ \begin{align} L = \sum_{k = 0}^N\ 2\pi(r + kh) & = 2\pi r + 2\pi R + \sum_{k = 1}^{N-1}\ 2\pi(r + kh) \\\\ & = 2\pi r + 2\pi R + 2\pi \sum_{k = 1}^{N-1} r + 2\pi h \sum_{k = 1}^{N-1} k \\\\ & = 2\pi r + 2\pi R + 2\pi r(N-1) + 2\pi h\left(\frac{1}{2}N(N-1)\right) \\\\ & = 2\pi r N + 2\pi R + \pi hN^2 - \pi h N \end{align} $$ Using now $N = \frac{b}{h}$, $R = b + r$ and $r = R - b$ (because $R$ is easily measurable), we arrive after a little algebra at $$\boxed{L = 4\pi b + 2\pi R\left(\frac{b}{h} - 1\right) - \pi b\left(1 + \frac{b}{h}\right)}$$ Small Example: $h = 0.1$ mm; $R = 75$ mm; $b = 50$ mm thence $L = 157$ meters, which might fit. Final Questions: 1) Could it be a good approximation? 2) What about the $\gamma$ factor? Namely the paper compression factor? 3) Could a similar calculation exist via integration over a spiral path? Because that's actually what it is: a spiral. Thank you so much for the time spent on this maybe tedious, maybe boring, maybe funny question!
The assumption that the layers are all cylindrical is a good first approximation. The assumption that the layers form a logarithmic spiral is not a good assumption at all, because it supposes that the thickness of the paper at any point is proportional to its distance from the center. This seems to me to be quite absurd. An alternative assumption is that the layers form an Archimedean spiral. This is slightly more realistic, since it says the paper has a uniform thickness from beginning to end. But this assumption is not much more realistic than the assumption that all layers are cylindrical; in fact, in some ways it is less realistic. Here's how a sheet of thickness $h$ actually wraps around a cylinder. First, we glue one side of the sheet (near the end of the sheet) to the surface of the cylinder. Then we start rotating the cylinder. As the cylinder rotates, it pulls the outstretched sheet around itself. Near the end of the first full rotation of the cylinder, the wrapping looks like this: Notice that the sheet lies directly on the surface of the cylinder, that is, this part of the wrapped sheet is cylindrical. At some angle of rotation, the glued end of the sheet hits the part of the sheet that is being wrapped. The point where the sheet is tangent to the cylinder at that time is the last point of contact with the cylinder; the sheet goes straight from that point to the point of contact with the glued end, and then proceeds to wrap in a cylindrical shape around the first layer of the wrapped sheet, like this: As we continue rotating the cylinder, it takes up more and more layers of the sheet, each layer consisting of a cylindrical section going most of the way around the roll, followed by a flat section that joins this layer to the next layer. We end up with something like this: Notice that I cut the sheet just at the point where it was about to enter another straight section. I claim (without proof) that this produces a local maximum in the ratio of the length of the wrapped sheet of paper to the greatest thickness of paper around the inner cylinder. The next local maximum (I claim) will occur at the corresponding point of the next wrap of the sheet. The question now is what the thickness of each layer is. The inner surface of the cylindrical portion of each layer of the wrapped sheet has less area than the outer surface, but the portion of the original (unwrapped) sheet that was wound onto the roll to make this layer had equal area on both sides. So either the inner surface was somehow compressed, or the outer surface was stretched, or both. I think the most realistic assumption is that both compression and stretching occurred. In reality, I would guess that the inner surface is compressed more than the outer surface is stretched, but I do not know what the most likely ratio of compression to stretching would be. It is simpler to assume that the two effects are equal. The length of the sheet used to make any part of one layer of the roll is therefore equal to the length of the surface midway between the inner and outer surfaces of that layer. For example, to wrap the first layer halfway around the central cylinder of radius $r$, we use a length $\pi\left(r + \frac h2\right)$ of the sheet of paper. The reason this particularly simplifies our calculations is that the length of paper used in any part of the roll is simply the area of the cross-section of that part of the roll divided by the thickness of the paper.
The entire roll has inner radius $r$ and outer radius $R = r + nh$, where $n$ is the maximum number of layers at any point around the central cylinder. (In the figure, $n = 5$.) The blue lines are sides of a right triangle whose vertices are the center of the inner cylinder and the points where the first layer last touches the inner cylinder and first touches its own end. This triangle has hypotenuse $r + h$ and one leg is $r$, so the other leg (which is the length of the straight portion of the sheet) is $$ \sqrt{(r + h)^2 - r^2} = \sqrt{(2r + h)h}.$$ Each straight portion of each layer is connected to the next layer of paper by wrapping around either the point of contact with the glued end of the sheet (the first time) or around the shape made by wrapping the previous layer around this part of the layer below; this forms a segment of a cylinder between the red lines with center at the point of contact with the glued end. The angle between the red lines is the same as the angle of the blue triangle at the center of the cylinder, namely $$ \alpha = \arccos \frac{r}{r+h}.$$ Now let's add up all parts of the roll. We have an almost-complete hollow cylinder with inner radius $r$ and outer radius $R$, missing only a segment of angle $\alpha$. The cross-sectional area of this is $$ A_1 = \left(\pi - \frac{\alpha}{2} \right) (R^2 - r^2).$$ We have a rectangular prism whose cross-sectional area is the product of two of its sides, $$ A_2 = (R - r - h) \sqrt{(2r + h)h}.$$ Finally, we have a segment of a cylinder of radius $R - r - h$ (between the red lines) whose cross-sectional area is $$ A_3 = \frac{\alpha}{2} (R - r - h)^2.$$ Adding this up and dividing by $h$, the total length of the sheet comes to \begin{align} L &= \frac1h (A_1+A_2+A_3)\\ &= \frac1h \left(\pi - \frac{\alpha}{2} \right) (R^2 - r^2) + \frac1h (R - r - h) \sqrt{(2r + h)h} + \frac{\alpha}{2h} (R - r - h)^2. \end{align} For $n$ layers on a roll, using the formula $R = r + nh$, we have $R - r = nh$, $R + r = 2r + nh$, $R^2 - r^2 = (R+r)(R-r) = (2r + nh)nh$, and $R - r - h = (n - 1)h$. The length then is \begin{align} L &= \left(\pi - \frac{\alpha}{2} \right) (2r + nh)n + (n - 1) \sqrt{(2r + h)h} + \frac{\alpha h}{2} (n - 1)^2\\ &= 2n\pi r + n^2\pi h + (n-1) \sqrt{(2r + h)h} - \left( n(r + h) - \frac h2 \right) \arccos \frac{r}{r+h}\\ &= n (R + r) \pi + (n-1) \sqrt{(2r + h)h} - \left( n(r + h) - \frac h2 \right) \arccos \frac{r}{r+h}. \end{align} One notable difference between this estimate and some others (including the original) is that I assume there can be at most $(R-r)/h$ layers of paper over any part of the central cylinder, not $1 + (R-r)/h$ layers. The total length is the number of layers times $2\pi$ times the average radius, $(R + r)/2$, adjusted by the amount that is missing in the section of the roll that is only $n - 1$ sheets thick. Things are not too much worse if we assume a different but uniform ratio of inner-compression to outer-stretching, provided that we keep the same paper thickness regardless of curvature; we just have to make an adjustment to the inner and outer radii of any cylindrical segment of the roll, which I think I'll leave as "an exercise for the reader." But this involves a change in volume of the sheet of paper. If we also keep the volume constant, we find that the sheet gets thicker or thinner depending on the ratio of stretch to compression and the curvature of the sheet. 
With constant volume, the length of paper in the main part of the roll (everywhere that we get the full number of layers) is the same as in the estimate above, but the total length of the parts of the sheet that connect one layer to the next might change slightly. Update: Per request, here are the results of applying the formula above to the input values given as an example in the question: $h=0.1$, $R=75$, and $r=25$ (inferred from $R-r=b=50$), all measured in millimeters. Since $n = (R-r)/h$, we have $n = 500$. For a first approximation of the total length of paper, let's consider just the first term of the formula. This gives us $$ L_1 = n (R + r) \pi = 500 \cdot 100 \pi \approx 157079.63267949, $$ or about $157$ meters, the same as in the example in the question. The remaining two terms yield \begin{align} L - L_1 &= (n-1)\sqrt{(2r + h)h} - \left( n(r + h) - \frac h2 \right) \arccos\frac{r}{r+h} \\ &= 499\sqrt{50.1 \cdot 0.1} - (500(25.1) - 0.05)\arccos\frac{25}{25.1} \\ &\approx -3.72246774. \end{align} This is a very small correction, less than $2.4\times 10^{-5} L_1$. In reality (as opposed to my idealized model of constant-thickness constant-volume toilet paper), this "correction" is surely insignificant compared to the uncertainties of estimating the average thickness of the paper in each layer of a roll (not to mention any non-uniformity in how it is rolled by the manufacturing machinery). We can also compare $\lvert L - L_1 \rvert$ to the amount of paper that would be missing if the paper in the "flat" segment of the roll were instead $n - 1$ layers following the curve of the rest of the paper. The angle $\alpha$ is about $0.089294$ radians (about $5.1162$ degrees), so if the missing layer were the innermost layer, its length would be $25.05 \alpha \approx 2.24$, and if it were the outermost layer it would be $74.95 \alpha \approx 6.69$ (in millimeters). Just for amusement, I also tried expanding $L - L_1$ as a power series around $h = 0$ (with a little help from Wolfram Alpha). (To make $L - L_1$ a function of one variable $h$ with constants $R$ and $r$, make the substitution $n = (R - r)/h$.) This turns out to be a series of powers of $\sqrt h$ whose leading term is $$ -\frac{(R + 2r)\sqrt2}{3\sqrt r} \sqrt h. $$ Plugging in the values from the example, this evaluates to approximately $-3.7267799625$. If you really wanted the length of the idealized toilet roll to the nearest millimeter, but could tolerate an error of a few $\mu\mathrm m$ (for typical dimensions of a toilet roll), a suitable approximation would be $$ L \approx \frac{\pi (R^2 - r^2)}{h} - \frac{(R + 2r)\sqrt2}{3\sqrt r} \sqrt h. $$
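For anyone who wants to plug in their own measurements, the final formula above is easy to evaluate numerically; here is a short Python sketch using the example values from the update:
import math

def roll_length(r, R, h):
    """Length of the idealized roll: inner radius r, outer radius R, sheet thickness h."""
    n = (R - r) / h                      # number of layers
    alpha = math.acos(r / (r + h))       # angle subtended by the straight segment
    return (n * (R + r) * math.pi
            + (n - 1) * math.sqrt((2 * r + h) * h)
            - (n * (r + h) - h / 2) * alpha)

print(roll_length(25, 75, 0.1))   # about 157076 mm, i.e. roughly 157 m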
https://api.stackexchange.com
Most of today's encryption, such as RSA, relies on integer factorization, which is not believed to be an NP-hard problem, but it belongs to BQP, which makes it vulnerable to quantum computers. I wonder why there has not been an encryption algorithm based on a known NP-hard problem. It sounds (at least in theory) like it would make a better encryption algorithm than one which is not proven to be NP-hard.
Worst-case Hardness of NP-complete problems is not sufficient for cryptography. Even if NP-complete problems are hard in the worst-case ($P \ne NP$), they still could be efficiently solvable in the average-case. Cryptography assumes the existence of average-case intractable problems in NP. Also, proving the existence of hard-on-average problems in NP using the $P \ne NP$ assumption is a major open problem. An excellent read is the classic by Russell Impagliazzo, A Personal View of Average-Case Complexity, 1995. An excellent survey is Average-Case Complexity by Bogdanov and Trevisan, Foundations and Trends in Theoretical Computer Science Vol. 2, No 1 (2006) 1–106
https://api.stackexchange.com
I'd like to learn the differences between 3 common formats such as FASTA, FASTQ and SAM. How are they different? Are there any benefits of using one over another? Based on the Wikipedia pages, I can't tell the differences between them.
Let’s start with what they have in common: All three formats store sequence data, and sequence metadata. Furthermore, all three formats are text-based. However, beyond that all three formats are different and serve different purposes. Let’s start with the simplest format: FASTA FASTA stores a variable number of sequence records, and for each record it stores the sequence itself, and a sequence ID. Each record starts with a header line whose first character is >, followed by the sequence ID. The next lines of a record contain the actual sequence. The Wikipedia article gives several examples for peptide sequences, but since FASTQ and SAM are used exclusively (?) for nucleotide sequences, here’s a nucleotide example: >Mus_musculus_tRNA-Ala-AGC-1-1 (chr13.trna34-AlaAGC) GGGGGTGTAGCTCAGTGGTAGAGCGCGTGCTTAGCATGCACGAGGcCCTGGGTTCGATCC CCAGCACCTCCA >Mus_musculus_tRNA-Ala-AGC-10-1 (chr13.trna457-AlaAGC) GGGGGATTAGCTCAAATGGTAGAGCGCTCGCTTAGCATGCAAGAGGtAGTGGGATCGATG CCCACATCCTCCA The ID can be in any arbitrary format, although several conventions exist. In the context of nucleotide sequences, FASTA is mostly used to store reference data; that is, data extracted from a curated database; the above is adapted from GtRNAdb (a database of tRNA sequences). FASTQ FASTQ was conceived to solve a specific problem arising during sequencing: Due to how different sequencing technologies work, the confidence in each base call (that is, the estimated probability of having correctly identified a given nucleotide) varies. This is expressed in the Phred quality score. FASTA had no standardised way of encoding this. By contrast, a FASTQ record contains a sequence of quality scores for each nucleotide. A FASTQ record has the following format: A line starting with @, containing the sequence ID. One or more lines that contain the sequence. A new line starting with the character +, and being either empty or repeating the sequence ID. One or more lines that contain the quality scores. Here’s an example of a FASTQ file with two records: @071112_SLXA-EAS1_s_7:5:1:817:345 GGGTGATGGCCGCTGCCGATGGCGTC AAATCCCACC + IIIIIIIIIIIIIIIIIIIIIIIIII IIII9IG9IC @071112_SLXA-EAS1_s_7:5:1:801:338 GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA + IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI FASTQ files are mostly used to store short-read data from high-throughput sequencing experiments. The sequence and quality scores are usually put into a single line each, and indeed many tools assume that each record in a FASTQ file is exactly four lines long, even though this isn’t guaranteed. As with FASTA, the format of the sequence ID isn’t standardised, but different producers of FASTQ use fixed notations that follow strict conventions. SAM SAM files are so complex that a complete description [PDF] takes 15 pages. So here’s the short version. The original purpose of SAM files is to store mapping information for sequences from high-throughput sequencing. As a consequence, a SAM record needs to store more than just the sequence and its quality, it also needs to store information about where and how a sequence maps into the reference. Unlike the previous formats, SAM is tab-based, and each record, consisting of either 11 or 12 fields, fills exactly one line. Here’s an example (tabs replaced by fixed-width spacing): r001 99 chr1 7 30 17M = 37 39 TTAGATAAAGGATACTG IIIIIIIIIIIIIIIII r002 0 chrX 9 30 3S6M1P1I4M * 0 0 AAAAGATAAGGATA IIIIIIIIII6IBI NM:i:1 For a description of the individual fields, refer to the documentation.
The relevant bit is this: SAM can express exactly the same information as FASTQ, plus, as mentioned, the mapping information. However, SAM is also used to store read data without mapping information. In addition to sequence records, SAM files can also contain a header, which stores information about the reference that the sequences were mapped to, and the tool used to create the SAM file. Header information precedes the sequence records, and consists of lines starting with @. SAM itself is almost never used as a storage format; instead, files are stored in BAM format, which is a compact, gzipped, binary representation of SAM. It stores the same information, just more efficiently. And, in conjunction with a search index, allows fast retrieval of individual records from the middle of the file (= fast random access). BAM files are also much more compact than compressed FASTQ or FASTA files. The above implies a hierarchy in what the formats can store: FASTA ⊂ FASTQ ⊂ SAM. In a typical high-throughput analysis workflow, you will encounter all three file types: FASTA to store the reference genome/transcriptome that the sequence fragments will be mapped to. FASTQ to store the sequence fragments before mapping. SAM/BAM to store the sequence fragments after mapping.
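As a small illustration of how simple FASTQ is to consume, here is a sketch of a reader that assumes the common four-lines-per-record layout mentioned above (a robust parser such as Biopython's handles the corner cases); the file name is just a placeholder:
def read_fastq(path):
    with open(path) as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:
                break                      # end of file
            seq = handle.readline().rstrip()
            handle.readline()              # the '+' separator line
            qual = handle.readline().rstrip()
            yield header[1:], seq, qual    # drop the leading '@'

for read_id, seq, qual in read_fastq("reads.fastq"):
    print(read_id, len(seq))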
https://api.stackexchange.com
As someone who holds a BA in physics I was somewhat scandalized when I began working with molecular simulations. It was a bit of a shock to discover that even the most detailed and computationally expensive simulations can't quantitatively reproduce the full behavior of water from first principles. Previously, I had been under the impression that the basic laws of quantum mechanics were a solved problem (aside from gravity, which is usually assumed to be irrelevant at molecular scale). However, it seems that once you try to scale those laws up and apply them to anything larger or more complex than a hydrogen atom their predictive power begins to break down. From a mathematics point of view, I understand that the wave functions quickly grow too complicated to solve and that approximations (such as Born-Oppenheimer) are required to make the wave functions more tractable. I also understand that those approximations introduce errors which propagate further and further as the time and spatial scales of the system under study increase. What is the nature of the largest and most significant of these approximation errors? How can I gain an intuitive understanding of those errors? Most importantly, how can we move towards an ab-initio method that will allow us to accurately simulate whole molecules and populations of molecules? What are the biggest unsolved problems that are stopping people from developing these kinds of simulations?
As far as I'm aware, the most accurate methods for static calculations are Full Configuration Interaction with a fully relativistic four-component Dirac Hamiltonian and a "complete enough" basis set. I'm not an expert in this particular area, but from what I know of the method, solving it using a variational method (rather than a Monte-Carlo based method) scales shockingly badly, since I think the number of Slater determinants you have to include in your matrix scales something like $O(^{n_{orbs}}C_{n_e})$. (There's an article on the computational cost here.) The related Monte-Carlo methods and methods based off them using "walkers" and networks of determinants can give results more quickly, but as implied above, aren't variational. And are still hideously costly. Approximations currently in practical use just for energies for more than two atoms include: Born-Oppenheimer, as you say: this is almost never a problem unless your system involves hydrogen atoms tunneling, or unless you're very near a state crossing/avoided crossing. (See, for example, conical intersections.) Conceptually, there are non-adiabatic methods for the wavefunction/density, including CPMD, and there's also Path-Integral MD which can account for nuclear tunneling effects. Nonrelativistic calculations, and two-component approximations to the Dirac equation: you can get an exact two-component formulation of the Dirac equation, but more practically the Zeroth-Order Regular Approximation (see Lenthe et al, JChemPhys, 1993) or the Douglas-Kroll-Hess Hamiltonian (see Reiher, ComputMolSci, 2012) are commonly used, and often (probably usually) neglecting spin-orbit coupling. Basis sets and LCAO: basis sets aren't perfect, but you can always make them more complete. DFT functionals, which attempt to provide a good enough treatment of the exchange and correlation without the computational cost of the more advanced methods below. (And which come in a few different levels of approximation. LDA is the entry-level one, GGA, metaGGA and including exact exchange go further than that, and including the RPA is still a pretty expensive and new-ish technique as far as I'm aware. There are also functionals which use differing techniques as a function of separation, and some which use vorticity which I think have application in magnetic or aromaticity studies.) (B3LYP, the functional some people love and some people love to hate, is a GGA including a percentage of exact exchange.) Configuration Interaction truncations: CIS, CISD, CISDT, CISD(T), CASSCF, RASSCF, etc. These are all approximations to CI which assume the most important excited determinants are the least excited ones. Multi-reference Configuration Interaction (truncations): Ditto, but with a few different starting reference states. Coupled-Cluster method: I don't pretend to properly understand how this works, but it obtains similar results to Configuration Interaction truncations with the benefit of size-consistency (i.e. $E(H_2) \times 2 = E((H_2)_2)$ at large separation). For dynamics, many of the approximations refer to things like the limited size of a tractable system, and practical timestep choice -- it's pretty standard stuff in the numerical time simulation field. There's also temperature maintenance (see Nose-Hoover or Langevin thermostats). This is mostly a set of statistical mechanics problems, though, as I understand it.
Anyway, if you're physics-minded, you can get a pretty good feel for what's neglected by looking at the formulations and papers about these methods: most commonly used methods will have at least one or two papers that aren't the original specification explaining their formulation and what it includes. Or you can just talk to people who use them. (People who study periodic systems with DFT are always muttering about what different functionals do and don't include and account for.) Very few of the methods have specific surprising omissions or failure modes. The most difficult problem appears to be proper treatment of electron correlation, and anything above the Hartree-Fock method, which doesn't account for it at all, is an attempt to include it. As I understand it, getting to the accuracy of Full relativistic CI with complete basis sets is never going to be cheap without dramatically reinventing (or throwing away) the algorithms we currently use. (And for people saying that DFT is the solution to everything, I'm waiting for your pure density orbital-free formulations.) There's also the issue that the more accurate you make your simulation by including more contributions and more complex formulations, the harder it is to actually do anything with it. For example, spin-orbit coupling is sometimes avoided solely because it makes everything more complicated to analyse (but sometimes also because it has negligible effect), and the canonical Hartree-Fock or Kohn-Sham orbitals can be pretty useful for understanding qualitative features of a system without layering on the additional output of more advanced methods. (I hope some of this makes sense, it's probably a bit spotty. And I've probably missed someone's favourite approximation or niggle.)
https://api.stackexchange.com
Suppose I would like to insert data-cables of varying diameters -- e.g., a cable of 5 mm diameter -- into the 6 mm diameter hole of a plastic enclosure. The wires within the cable are terminated via soldering to a PCB inside the enclosure. What methods are used in the industry to ensure that pulling the cable won't make it slide in and out of the enclosure (thus preventing damage to the wire connections to the PCB inside)? Some options that I have considered: Two small lengths of thick heat shrink tubing placed around the cable, both just inside and just outside the wall of the enclosure. If the tubing is wide enough, then it will block the cable from sliding. This could work, but it may require too many layers of tubing, and the fit by friction alone may not be strong enough. Apply a thick layer of rubber-compatible adhesive in a circle around the cable, both just inside and just outside the wall of the enclosure. The glue blob would act as sort of a bolt/washer. This is too messy in practice, and probably not usable professionally. Use rubber-and-steel-compatible adhesive to place two bolts around the cable, one just inside and one just outside the wall of the enclosure. The problem with this is that it is hard to find an adhesive that bonds well to both rubber and steel.
There are a few industry approaches to this. The first is molded cables. The cables themselves have strain reliefs molded to fit a given entry point, either by custom moulding or with off-the-shelf reliefs that are chemically welded/bonded to the cable. Not just glued, but welded together. The second is entry points designed to hold the cable. The cable is bent in a z or u shape around posts to hold it in place. The strength of the cable is used to prevent it from being pulled out. Similarly, but less often seen now in the days of cheap molding or DIY kits, the cable is screwed into a holder which is prevented from moving in OR out by the case and screw posts. Both of those options are a bit out of an individual's reach. The third is through the use of cord grips or cable glands, also known as grommets, especially if a watertight fit is needed. They are screwed on, the cable is passed through, then the grip part is screwed down. These prevent the cable from moving in or out, as well as sealing the hole. Most can accommodate cables at least 80% of the size of the opening. Any smaller and they basically won't do the job. Other options include cable fasteners or holders. These go around the cable and are screwed or bolted down (or use plastic press fits). These can be screwed into a PCB, for example. Cable grommets are a fairly hacky way of doing it, as they are not designed to hold onto the cable. Instead they are designed to prevent the cable from being cut or damaged on a sharp or thin edge. But they can do in a pinch. As can tying a knot, though that mainly prevents pull-outs and might not be ideal for digital signals. Pushing a cable in doesn't happen too often, so you might not worry about that. Similar to the second method is using two or three holes in a PCB to push a cable through (up, down, up), then pulling it tight. This moves the point of pressure away from the solder point and onto the cable+jacket. The other industry method is avoiding all this in the first place, by using panel-mounted connectors (or board-mounted connectors like Dell does for power plugs, yuck).
https://api.stackexchange.com
I have many alignments from the Rfam database, and I would like to edit them. I saw that many tools are used for protein sequence alignments, but is there something specific for editing RNA alignments? e.g. Stockholm alignment of Pistol (just a few entries).
FP929053.1/1669026-1668956      AGUGGUCACAGCCACUAUAAACA-GGGCUU-UAAGCUGUG-AGCGUUGACCGUC----------ACAA-----CGGCGGUCAGGUAGUC
AFOX01000025.1/1981-1912        ACUCGUCUGAGCGAGUAUAAACA-GGUCAU-UAAGCUCAG-AGCGUUCACCGGG----------AUCA------UUCGGUGAGGUUGGC
HE577054.1/3246821-3246752      ACUCGUCUGAGCGAGUAUAAACA-GGUCAU-UAAGCUCAG-AGCGUUCACCGGG----------AUCA------UGCGGUGAGGUUGGC
CP000154.1/3364237-3364168      GUUCGUCUGAGCGAACGCAAACA-GGCCAU-UAAGCUCAG-AGCGUUCACUGGA----------UUCG------UCCAGUGAGAUUGGC
#=GC SS_cons                    <<<<__AAAAA_>>>>-------..<<<<-.----aaaaa.----<<<<<<<<<..........____....._>>>>>>>>>-->>>>
#=GC RF                         acUCGUCuggGCGAguAUAAAuA..cgCaU.UAgGCccaG.AGCGUcccggcgg..........uUau.....uccgccgggGGUuGcg
//
I would suggest using RALEE—RNA ALignment Editor in Emacs. It can get the consensus secondary structure for you, you can move sequences and their secondary structures left/right (you can't do that in JalView!), and more. It's an Emacs mode, so it could be a bit hard to start off, but just try; you don't have to use all Emacs features to edit your alignments! The RALEE (RNA ALignment Editor in Emacs) tool provides a simple environment for RNA multiple sequence alignment editing, including structure-specific colour schemes, utilizing helper applications for structure prediction and many more conventional editing functions. Sam Griffiths-Jones Bioinformatics (2005) 21 (2): 257-259. Fig. You can move sequences and their secondary structures left/right (you can't do that in JalView!)
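If you also need to manipulate such alignments from a script rather than interactively, Biopython can read the Stockholm format directly (a sketch; the file name is a hypothetical placeholder):
from Bio import AlignIO

alignment = AlignIO.read("pistol.sto", "stockholm")
for record in alignment:
    print(record.id, record.seq[:30])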
https://api.stackexchange.com
Many seem to believe that $P\ne NP$, but many also believe it to be very unlikely that this will ever be proven. Is there not some inconsistency to this? If you hold that such a proof is unlikely, then you should also believe that sound arguments for $P\ne NP$ are lacking. Or are there good arguments for $P\ne NP$ being unlikely, in a similar vein to, say, the Riemann hypothesis holding for large numbers, or the very high lower bounds on the number of existing primes with a small distance apart, viz. the Twin Prime conjecture?
People are skeptical because: No proof has come from an expert without having been rescinded shortly thereafter So much effort has been put into finding a proof, with no success, that it's assumed one will be either substantially complicated, or invent new mathematics for the proof The "proofs" that arise frequently fail to address hurdles which are known to exist. For example, many claim that 3SAT is not in P, while providing an argument that also applies to 2SAT. To be clear, the skepticism is of the proofs, not of the result itself.
https://api.stackexchange.com
I've always thought vaguely that the answer to the above question was affirmative along the following lines. Gödel's incompleteness theorem and the undecidability of the halting problem are both negative results about decidability, established by diagonal arguments (and in the 1930's), so they must somehow be two ways to view the same matters. And I thought that Turing used a universal Turing machine to show that the halting problem is unsolvable. (See also this math.SE question.) But now that (teaching a course in computability) I look closer into these matters, I am rather bewildered by what I find. So I would like some help with straightening out my thoughts. I realise that on one hand Gödel's diagonal argument is very subtle: it needs a lot of work to construct an arithmetic statement that can be interpreted as saying something about its own derivability. On the other hand the proof of the undecidability of the halting problem I found here is extremely simple, and doesn't even explicitly mention Turing machines, let alone the existence of universal Turing machines. A practical question about universal Turing machines is whether it is of any importance that the alphabet of a universal Turing machine be the same as that of the Turing machines that it simulates. I thought that would be necessary in order to concoct a proper diagonal argument (having the machine simulate itself), but I haven't found any attention to this question in the bewildering collection of descriptions of universal machines that I found on the net. If not for the halting problem, are universal Turing machines useful in any diagonal argument? Finally I am confused by this further section of the same WP article, which says that a weaker form of Gödel's incompleteness follows from the halting problem: "a complete, consistent and sound axiomatisation of all statements about natural numbers is unachievable" where "sound" is supposed to be the weakening. I know a theory is consistent if one cannot derive a contradiction, and a complete theory about natural numbers would seem to mean that all true statements about natural numbers can be derived in it; I know Gödel says such a theory does not exist, but I fail to see how such a hypothetical beast could possibly fail to be sound, i.e., also derive statements which are false for the natural numbers: the negation of such a statement would be true, and therefore by completeness also derivable, which would contradict consistency. I would appreciate any clarification on one of these points.
I recommend you to check Scott Aaronson's blog post on a proof of the Incompleteness Theorem via Turing machines and Rosser's Theorem. His proof of the incompleteness theorem is extremely simple and easy to follow.
https://api.stackexchange.com
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
I realize this question has been answered, but I don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. In particular, the link describes one technique for programmatic network configuration, but that is not "[a] standard and accepted method" for network configuration. By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one. But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs--in other words, eliminating unnecessary/redundant nodes (more on this below). So every NN has three types of layers: input, hidden, and output. Creating the NN architecture, therefore, means coming up with values for the number of layers of each type and the number of nodes in each of these layers. The Input Layer Simple--every NN has exactly one of them--no exceptions that I'm aware of. With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term. The Output Layer Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration. Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine mode: returns a class label (e.g., "Premium Account"/"Basic Account"). Regression Mode returns a value (e.g., price). If the NN is a regressor, then the output layer has a single node. If the NN is a classifier, then it also has a single node unless softmax is used in which case the output layer has one node per class label in your model. The Hidden Layers So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers. How many hidden layers? Well, if your data is linearly separable (which you often know by the time you begin coding a NN), then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job. Beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems. So what about the size of the hidden layer(s)--how many neurons?
There are some empirically derived rules of thumb; of these, the most commonly relied on is 'the optimal size of the hidden layer is usually between the size of the input and size of the output layers'. Jeff Heaton, the author of Introduction to Neural Networks in Java, offers a few more. In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) the number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers. Optimization of the Network Configuration Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look at weights very close to zero--it's the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., 'prunable') nodes--in other words, when deciding on network architecture, err on the side of more neurons, if you add a pruning step. Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single "up-front" (such as a genetic-algorithm-based algorithm), I don't know, though I do know that for now, this two-step optimization is more common.
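Purely as an illustration of those two rules of thumb (a heuristic starting point, not a standard), a short sketch in Python:
def suggest_architecture(n_features, n_classes, use_softmax=False):
    n_in = n_features                          # one input node per feature (plus optional bias)
    n_out = n_classes if use_softmax else 1    # regression or single-label output -> 1 node
    n_hidden = round((n_in + n_out) / 2)       # mean of the input and output layer sizes
    return [n_in, n_hidden, n_out]             # a single hidden layer

print(suggest_architecture(20, 3, use_softmax=True))   # [20, 12, 3]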
https://api.stackexchange.com
I heard that the current limit for a USB port is 100mA. However, I also heard that some devices can get up to 1.8A from a port. How do you get past the 100mA limit?
I think I can attempt to clear this up. USB-100mA USB by default will deliver 100mA of current (it is 500mW power because we know it is 5V, right?) to a device. This is the most you can pull from a USB hub that does not have its own power supply, as they never offer more than 4 ports and keep a greedy 100mA for themselves. Some computers that are cheaply built will use a bus-powered hub (all of your USB connections share the same 500mA source and the electronics acting as a hub use that source also) internally to increase the number of USB ports and to save a small amount of money. This can be frustrating, but you can always be guaranteed 100mA. USB-500mA When a device is connected it goes through enumeration. This is not a trivial process and can be seen in detail on Jan Axelson's site. As you can see this is a long process, but a chip from a company like FTDI will handle the hard part for you. They discuss enumeration in one of their app notes. Near the end of enumeration you set up device parameters. Very specifically the configuration descriptors. If you look on this website they will show you all of the different pieces that can be set. It shows that you can get right up to 500mA of power requested. This is what you can expect from a computer. You can get FTDI chips to handle this for you, which is nice, as you only have to treat the chip as a serial line. USB-1.8A This is where things get interesting. You can purchase a charger that does outlet to USB at the store. This is a USB charging port. Your computer does not supply these, and your device must be able to recognize it. First, to get the best information about USB, you sometimes have to bite the bullet and go to the people who write the spec. I found great information about the USB charging spec here. The link on the page that is useful is the link for battery charging. This link seems to be tied to the revision number, so I have linked both; if the revision is updated, people can still access the information. Now, what does this mean? If you open up the batt_charging PDF and jump to chapter three they go into charging ports. Specifically 3.2.1 explains how this is gone about. Now they keep it very technical, but the key point is simple. A USB charging port places a termination resistance between D+ and D-. I would like to copy out the chapter that explains it, but it is a secured PDF and I cannot copy it out without retyping it. Summing it up You may pull 100mA from a computer port. You may pull 500mA after enumeration and setting the correct configuration. Computers vary their enforcement, as many others have said, but most I have had experience with will try to stop you. If you violate this, you may also damage a poorly designed computer (Davr is spot on there, this is poor practice). You may pull up to 1.8A from a charging port, but this is a rare case where the port tells you something. You have to check for this and when it is verified you may do it. This is the same as buying a wall adapter, but you get to use a USB cable and USB port. Why use the charging spec? So that when my phone dies, my charger charges it quickly, but if I do not have my charger I may pull power from a computer, while using the same hardware port to communicate files and information with my computer. Please let me know if there is anything I can add.
https://api.stackexchange.com
This question: Can you get enough water by eating only fish? asks if a person could survive on fish alone. Can a person survive on the fish and/or blood of any species if stuck at sea, or on animal blood as a last resort where there is no water or fire? Obviously if it were a freshwater fish there would be water, but there are freshwater mudskippers that can breathe air and live where the water is too tainted to drink; in that case a freshwater fish's blood may be safer than the water. Desalination would be the best way to process the blood, but this is an emergency-situation scenario. From the link in @PTwr's comment: If you drink blood regularly, over a long period of time the buildup of iron in your system can cause iron overload. This syndrome, which sometimes affects people who have repeated blood transfusions, is one of the few conditions for which the correct treatment is bloodletting.
Blood is not a good source of water. 1 liter of blood contains about 800 mL of water, 170 grams of protein and 2 grams of sodium (calculated from the composition of lamb blood). When metabolized, 170 grams of protein yields the amount of urea that requires 1,360 mL of water to be excreted in urine (calculated from here); 2 grams of sodium requires about 140 mL of water to be excreted (from here). This means that drinking 1 liter of blood, which contains 800 mL of water, will result in 1,500 mL of water loss through the kidneys, which will leave you with 700 mL of negative water balance. Fish blood can contain less protein, for example, trout (check Table 1) contains about 120 g of protein (plasma protein + hemoglobin) per liter of blood. Using the same calculation as above (1 g protein results in the excretion of 8 mL of urine), drinking 1 liter of trout blood, which contains about 880 mL of water, will result in 960 mL of urine, so in 80 mL of negative water balance. Turtle blood can contain about 80 g of protein (plasma protein + hemoglobin) and 3.4 g of sodium per liter. Drinking 1 liter of turtle blood, which contains about 920 mL of water, will result in 80 x 8 mL = 640 mL loss of urine due to protein, and ~240 mL due to sodium, which is 880 mL of urine in total. This leaves you with 40 mL of positive water balance (to get 2 liters of water per day you would need to drink 50 liters of turtle blood, which isn't realistic). In various stories (The Atlantic, The Diplomat, The Telegraph), according to which people have survived by drinking turtle blood, they have also drunk rainwater, so we can't conclude it was turtle blood that helped them. I'm not aware of any story that would provide convincing evidence that the blood of turtle or any other animal is hydrating.
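The bookkeeping above is simple enough to write down; here is a small Python sketch using the figures quoted in this answer (8 mL of urine per gram of protein, roughly 70 mL per gram of sodium):
def net_water_balance(water_ml, protein_g, sodium_g):
    urine_ml = protein_g * 8 + sodium_g * 70
    return water_ml - urine_ml

print(net_water_balance(800, 170, 2))    # lamb blood:   about -700 mL per liter
print(net_water_balance(880, 120, 0))    # trout blood:  about  -80 mL per liter (sodium not given)
print(net_water_balance(920,  80, 3.4))  # turtle blood: about  +40 mL per liter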
https://api.stackexchange.com
Firstly, I am new to DSP and have no real education in it, but I am developing an audio visualization program and I am representing an FFT array as vertical bars as in a typical frequency spectrum visualization. The problem I had was that the audio signal values changed too rapidly to produce a pleasing visual output if I just mapped the FFT values directly: So I apply a simple function to the values in order to "smooth out" the result:
// pseudo-code
delta = fftValue - smoothedFftValue;
smoothedFftValue += delta * 0.2; // 0.2 is arbitrary - the lower the number, the more "smoothing"
In other words, I am taking the current value and comparing it to the last, and then adding a fraction of that delta to the last value. The result looks like this: So my question is: Is this a well-established pattern or function for which a term already exists? If so, what is the term? I use "smoothing" above but I am aware that this means something very specific in DSP and may not be correct. Other than that it seemed maybe related to a volume envelope, but also not quite the same thing. Are there better approaches or further study on solutions to this which I should look at? Thanks for your time and apologies if this is a stupid question (reading other discussions here, I am aware that my knowledge is much lower than the average it seems).
What you've implemented is a single-pole lowpass filter, sometimes called a leaky integrator. Your signal has the difference equation: $$ y[n] = 0.8 y[n-1] + 0.2 x[n] $$ where $x[n]$ is the input (the unsmoothed bin value) and $y[n]$ is the smoothed bin value. This is a common way of implementing a simple, low-complexity lowpass filter. I've written about them several times before in previous answers; see [1] [2] [3].
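For reference, the same one-pole smoother applied to a whole frame of FFT bin magnitudes at once might look like this in Python (alpha plays the role of the 0.2 in the question's pseudo-code; the frames here are random stand-ins):
import numpy as np

def smooth_bins(new_frame, state, alpha=0.2):
    """y[n] = (1 - alpha) * y[n-1] + alpha * x[n], applied independently to each bin."""
    state += alpha * (new_frame - state)
    return state

rng = np.random.default_rng(0)
state = np.zeros(8)                  # one smoothed value per FFT bin
for _ in range(5):                   # stand-in for successive FFT frames
    frame = rng.random(8)            # pretend these are bin magnitudes
    print(smooth_bins(frame, state))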
https://api.stackexchange.com
Can anyone state the difference between frequency response and impulse response in simple English?
The impulse response and frequency response are two attributes that are useful for characterizing linear time-invariant (LTI) systems. They provide two different ways of calculating what an LTI system's output will be for a given input signal. A continuous-time LTI system is usually illustrated like this: In general, the system $H$ maps its input signal $x(t)$ to a corresponding output signal $y(t)$. There are many types of LTI systems that can apply very different transformations to the signals that pass through them. But, they all share two key characteristics: The system is linear, so it obeys the principle of superposition. Stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. That is, if $x_1(t)$ maps to an output of $y_1(t)$ and $x_2(t)$ maps to an output of $y_2(t)$, then for all values of $a_1$ and $a_2$, $$ H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 y_1(t) + a_2 y_2(t) $$ The system is time-invariant, so its characteristics do not change with time. If you add a delay to the input signal, then you simply add the same delay to the output. For an input signal $x(t)$ that maps to an output signal $y(t)$, then for all values of $\tau$, $$ H\{x(t - \tau)\} = y(t - \tau) $$ Discrete-time LTI systems have the same properties; the notation is different because of the discrete-versus-continuous difference, but they are a lot alike. These characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. They provide two perspectives on the system that can be used in different contexts. Impulse Response: The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal. For continuous-time systems, this is the Dirac delta function $\delta(t)$, while for discrete-time systems, the Kronecker delta function $\delta[n]$ is typically used. A system's impulse response (often annotated as $h(t)$ for continuous-time systems or $h[n]$ for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input. Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way. For discrete-time systems, this is possible, because you can write any signal $x[n]$ as a sum of scaled and time-shifted Kronecker delta functions: $$ x[n] = \sum_{k=0}^{\infty} x[k] \delta[n - k] $$ Each term in the sum is an impulse scaled by the value of $x[n]$ at that time instant. What would we get if we passed $x[n]$ through an LTI system to yield $y[n]$? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is: $$ y[n] = \sum_{k=0}^{\infty} x[k] h[n-k] $$ where $h[n]$ is the system's impulse response. The above equation is the convolution theorem for discrete-time LTI systems.
That is, for any signal $x[n]$ that is input to an LTI system, the system's output $y[n]$ is equal to the discrete convolution of the input signal and the system's impulse response. For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems: $$ y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) d\tau $$ where, again, $h(t)$ is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the $L^2$ Hilbert space, noting that you can use the delta function's sifting property to project any function in $L^2$ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here. In summary: For both discrete- and continuous-time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal; the output is simply the input signal convolved with the impulse response function. Frequency response: An LTI system's frequency response provides a similar function: it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain. Recall the definition of the Fourier transform: $$ X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi ft} dt $$ More importantly for the sake of this illustration, look at its inverse: $$ x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi ft} df $$ In essence, this relation tells us that any time-domain signal $x(t)$ can be broken up into a linear combination of many complex exponential functions at varying frequencies (there is an analogous relationship for discrete-time signals called the discrete-time Fourier transform; I only treat the continuous-time case below for simplicity). For a time-domain signal $x(t)$, the Fourier transform yields a corresponding function $X(f)$ that specifies, for each frequency $f$, the scaling factor to apply to the complex exponential at frequency $f$ in the aforementioned linear combination. These scaling factors are, in general, complex numbers. One way of looking at complex numbers is in amplitude/phase format, that is: $$ X(f) = A(f) e^{j \phi(f)} $$ Looking at it this way, then, $x(t)$ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $A(f)$ and shifted in phase by the function $\phi(f)$. This lines up well with the LTI system properties that we discussed previously; if we can decompose our input signal $x(t)$ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions. Here's where it gets better: exponential functions are the eigenfunctions of linear time-invariant systems. 
The idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an LTI system, you get the same exponential function out, scaled by a (generally complex) value. This has the effect of changing the amplitude and phase of the exponential function that you put in. This is immensely useful when combined with the Fourier-transform-based decomposition discussed above. As we said before, we can write any signal $x(t)$ as a linear combination of many complex exponential functions at varying frequencies. If we pass $x(t)$ into an LTI system, then (because those exponentials are eigenfunctions of the system), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. This effect on the exponentials' amplitudes and phases, as a function of frequency, is the system's frequency response. That is, for an input signal with Fourier transform $X(f)$ passed into system $H$ to yield an output with a Fourier transform $Y(f)$, $$ Y(f) = H(f) X(f) = A(f) e^{j \phi(f)} X(f) $$ In summary: If we know a system's frequency response $H(f)$ and the Fourier transform of the signal that we put into it $X(f)$, then it is straightforward to calculate the Fourier transform of the system's output; it is merely the product of the frequency response and the input signal's transform. For each complex exponential frequency that is present in the spectrum $X(f)$, the system has the effect of scaling that exponential in amplitude by $A(f)$ and shifting the exponential in phase by $\phi(f)$ radians. Bringing them together: An LTI system's impulse response and frequency response are intimately related. The frequency response is simply the Fourier transform of the system's impulse response (to see why this relation holds, see the answers to this other question). So, for a continuous-time system: $$ H(f) = \int_{-\infty}^{\infty} h(t) e^{-j 2 \pi ft} dt $$ So, given either a system's impulse response or its frequency response, you can calculate the other. Either one is sufficient to fully characterize the behavior of the system; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
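As a purely illustrative check of both viewpoints (this sketch is my addition, not part of the answer; the 5-tap moving average is just an assumed example impulse response):

import numpy as np

h = np.ones(5) / 5.0                                  # assumed impulse response h[n]: 5-tap moving average
x = np.random.default_rng(0).standard_normal(256)     # arbitrary input signal x[n]

# Time-domain view: output by discrete convolution with the impulse response
y_time = np.convolve(x, h)

# Frequency-domain view: multiply the input spectrum by the frequency response,
# i.e. the DFT of the (zero-padded) impulse response
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(np.allclose(y_time, y_freq))                    # True: both views give the same output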
https://api.stackexchange.com
If $n>1$ is an integer, then $\sum \limits_{k=1}^n \frac1k$ is not an integer. If you know Bertrand's Postulate, then you know there must be a prime $p$ between $n/2$ and $n$, so $\frac 1p$ appears in the sum, but $\frac{1}{2p}$ does not. Aside from $\frac 1p$, every other term $\frac 1k$ has $k$ divisible only by primes smaller than $p$. We can combine all those terms to get $\sum_{k=1}^n\frac 1k = \frac 1p + \frac ab$, where $b$ is not divisible by $p$. If this were an integer, then (multiplying by $b$) $\frac bp +a$ would also be an integer, which it isn't since $b$ isn't divisible by $p$. Does anybody know an elementary proof of this which doesn't rely on Bertrand's Postulate? For a while, I was convinced I'd seen one, but now I'm starting to suspect whatever argument I saw was wrong.
Hint $ $ There is a $\rm\color{darkorange}{unique}$ denominator $\rm\,\color{#0a0} {2^K}$ having maximal power of $\:\!2,\,$ so scaling by $\rm\,\color{#c00}{2^{K-1}}$ we deduce a contradiction $\large \rm\, \frac{1}2 = \frac{c}d \,$ with odd $\rm\,d \:$ (vs. $\,\rm d = 2c),\,$ e.g. $$\begin{eqnarray} & &\rm\ \ \ \ \color{0a0}{m} &=&\ \ 1 &+& \frac{1}{2} &+& \frac{1}{3} &+&\, \color{#0a0}{\frac{1}{4}} &+& \frac{1}{5} &+& \frac{1}{6} &+& \frac{1}{7} \\ &\Rightarrow\ &\rm\ \ \color{#c00}{2}\:\!m &=&\ \ 2 &+&\ 1 &+& \frac{2}{3} &+&\, \color{#0a0}{\frac{1}{2}} &+& \frac{2}{5} &+& \frac{1}{3} &+& \frac{2}{7}^\phantom{M^M}\\ &\Rightarrow\ & -\color{#0a0}{\frac{1}{2}}\ \ &=&\ \ 2 &+&\ 1 &+& \frac{2}{3} &-&\rm \color{#c00}{2}\:\!m &+& \frac{2}{5} &+& \frac{1}{3} &+& \frac{2}{7}^\phantom{M^M} \end{eqnarray}$$ All denom's in the prior fractions are odd so they sum to fraction with odd denom $\rm\,d\, |\, 3\cdot 5\cdot 7$. Note $ $ Said $\rm\color{darkorange}{uniqueness}$ has easy proof: if $\rm\:j\:\! 2^K$ is in the interval $\rm\,[1,n]\,$ then so too is $\,\rm \color{#0a0}{2^K}\! \le\, j\:\!2^K.\,$ But if $\,\rm j\ge 2\,$ then the interval contains $\rm\,2^{K+1}\!= 2\cdot\! 2^K\! \le j\:\!2^K,\,$ contra maximality of $\,\rm K$. The argument is more naturally expressed using valuation theory, but I purposely avoided that because Anton requested an "elementary" solution. The above proof can easily be made comprehensible to a high-school student. Generally we can similarly prove that a sum of fractions is nonintegral if the highest power of a prime $\,p\,$ in any denominator occurs in $\rm\color{darkorange}{exactly\ one}$ denominator, e.g. see the Remark here where I explain how it occurs in a trickier multiplicative form (from a contest problem). In valuation theory, this is a special case of a basic result on the valuation of a sum (sometimes called the "dominance lemma" or similar). Another common application occurs when the sum of fractions arises from the evaluation of a polynomial, e.g. see here and its comment.
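As a quick numerical sanity check of the claim (my addition, not part of the hint above): exact rational arithmetic shows that the reduced denominator of $\sum_{k=1}^n \frac1k$ keeps a factor of $2$ for every $n>1$ tested, exactly as the $2^K$ argument predicts, so the sum cannot be an integer.

from fractions import Fraction

for n in range(2, 21):
    h = sum(Fraction(1, k) for k in range(1, n + 1))
    # The reduced denominator stays even, matching the maximal-power-of-2 argument
    print(n, h, h.denominator % 2 == 0)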
https://api.stackexchange.com
I am currently looking for a system which will allow me to version both the code and the data in my research. I think my way of analyzing data is not uncommon, and this will be useful for many people doing bioinformatics and aiming for reproducibility. Here are the requirements: Analysis is performed on multiple machines (local, cluster, server). All the code is transparently synchronized between the machines. Source code versioning. Generated data versioning. Support for a large number of small generated files (>10k). These also could be deleted. Support for large files (>1Gb). At some point old generated files can be permanently deleted. It would be insane to have transparent synchronization of those, but being able to synchronize them on demand would be nice. So far I am using git + rsync/scp. But there are several downsides to it. Synchronization between multiple machines is a bit tedious, i.e. you have to git pull before you start working and git push after each update. I can live with that. You are not supposed to store large generated data files or a large number of files inside your repository. Therefore I have to synchronize data files manually using rsync, which is error prone. There is something called git annex. It seems really close to what I need. But: A bit more work than git, but that's ok. Unfortunately it seems it does not work well with a large number of files. Often I have more than 10k small files in my analysis. There are some tricks to improve indexing, but it doesn't solve the issue. What I need is one symlink representing the full contents of a directory. One potential solution is to use Dropbox or something similar (like syncthing) in combination with git. But the downside is there will be no connection between the source code version and the data version. Is there any versioning system for the code and the data meeting the requirements you can recommend?
There are a couple of points to consider here, which I outline below. The goal here should be to find a workflow that is minimally intrusive on top of already using git. As of yet, there is no ideal workflow that covers all use cases, but what I outline below is the closest I could come to it. Reproducibility is not just keeping all your data You have got your raw data that you start your project with. All other data in your project directory should never just "be there", but have some record of where it comes from. Data processing scripts are great for this, because they already document how you went from your raw to your analytical data, and then the files needed for your analyses. And those scripts can be versioned, with an appropriate single entry point of processing (e.g. a Makefile that describes how to run your scripts). This way, the state of all your project files is defined by the raw data, and the version of your processing scripts (and versions of external software, but that's a whole different kind of problem). What data/code should and should not be versioned Just as you would not version generated code files, you should not want to version 10k intermediary data files that you produced when performing your analyses. The data that should be versioned is your raw data (at the start of your pipeline), not automatically generated files. You might want to take snapshots of your project directory, but not keep every version of every file ever produced. This already cuts down your problem by a fair margin. Approach 1: Actual versioning of data For your raw or analytical data, Git LFS (and alternatively Git Annex, that you already mention) is designed to solve exactly this problem: add tracking information of files in your Git tree, but do not store the content of those files in the repository (because otherwise it would add the size of a non-diffable file with every change you make). For your intermediate files, you do the same as you would do with intermediate code files: add them to your .gitignore and do not version them. This raises a couple of considerations: Git LFS is a paid service from Github (the free tier is limited to 1 GB of storage/bandwidth per month, which is very little), and it is more expensive than other comparable cloud storage solutions. You could consider paying for the storage at Github or running your own LFS server (there is a reference implementation, but I assume this would still be a substantial effort) Git Annex is free, but it replaces files by links and hence changes time stamps, which is a problem for e.g. GNU Make based workflows (major drawback for me). Also, fetching of files needs to be done manually or via a commit hook Approach 2: Versioning code only, syncing data If your analytical data stays the same for most of your analyses, the actual need to version it (as opposed to backing up and documenting data provenance, which is essential) may be limited. The key to getting this working is to put all data files in your .gitignore and ignore all your code files in rsync, with a script in your project root (extensions and directories are an example only):

#!/bin/bash
cd $(dirname $0)
rsync -auvr \
  --exclude "*.r" \
  --include "*.RData" \
  --exclude "dir with huge files that you don't need locally" \
  yourhost:/your/project/path/* .

The advantage here is that you don't need to remember the rsync command you are running. The script itself goes into version control. 
This is especially useful if you do your heavy processing on a computing cluster but want to make plots from your result files on your local machine. I argue that you generally don't need bidirectional sync.
https://api.stackexchange.com
I'm using the Python Keras package for a neural network. This is the link. Is batch_size equal to the number of test samples? From Wikipedia we have this information: However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems. Is the above information describing test data? Is this the same as batch_size in Keras (number of samples per gradient update)?
The batch size defines the number of samples that will be propagated through the network. For instance, let's say you have 1050 training samples and you want to set up a batch_size equal to 100. The algorithm takes the first 100 samples (from 1st to 100th) from the training dataset and trains the network. Next, it takes the second 100 samples (from 101st to 200th) and trains the network again. We can keep doing this procedure until we have propagated all samples through the network. A problem might arise with the last set of samples. In our example, we've used 1050, which is not divisible by 100 without a remainder. The simplest solution is just to get the final 50 samples and train the network. Advantages of using a batch size < number of all samples: It requires less memory. Since you train the network using fewer samples, the overall training procedure requires less memory. That's especially important if you are not able to fit the whole dataset in your machine's memory. Typically networks train faster with mini-batches. That's because we update the weights after each propagation. In our example we've propagated 11 batches (10 of them had 100 samples and 1 had 50 samples) and after each of them we've updated our network's parameters. If we used all samples during propagation we would make only 1 update of the network's parameters. Disadvantages of using a batch size < number of all samples: The smaller the batch the less accurate the estimate of the gradient will be. In the figure below, you can see that the direction of the mini-batch gradient (green color) fluctuates much more in comparison to the direction of the full batch gradient (blue color). Stochastic is just a mini-batch with batch_size equal to 1. In that case, the gradient changes its direction even more often than a mini-batch gradient.
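For concreteness, here is how the batch_size argument is passed in Keras; this sketch is my own, with made-up toy data and an arbitrary little model, and with 1050 samples and batch_size=100 one epoch performs the 11 weight updates described above:

import numpy as np
from tensorflow import keras   # assumes TensorFlow's bundled Keras

# Toy data: 1050 samples with 20 features each (numbers chosen to match the example above)
x_train = np.random.rand(1050, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1050, 1)).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

# batch_size=100 -> the weights are updated after every 100 samples;
# the last batch of each epoch contains the remaining 50 samples
model.fit(x_train, y_train, batch_size=100, epochs=1)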
https://api.stackexchange.com
What are some surprising equations/identities that you have seen, which you would not have expected? This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc. I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$. Please write a single identity (or group of identities) in each answer. I found this list of Funny identities, in which there is some overlap.
This one by Ramanujan gives me the goosebumps: $$ \frac{2\sqrt{2}}{9801} \sum_{k=0}^\infty \frac{ (4k)! (1103+26390k) }{ (k!)^4 396^{4k} } = \frac1{\pi}. $$ P.S. Just to make this more intriguing, define the fundamental unit $U_{29} = \frac{5+\sqrt{29}}{2}$ and fundamental solutions to Pell equations, $$\big(U_{29}\big)^3=70+13\sqrt{29},\quad \text{thus}\;\;\color{blue}{70}^2-29\cdot\color{blue}{13}^2=-1$$ $$\big(U_{29}\big)^6=9801+1820\sqrt{29},\quad \text{thus}\;\;\color{blue}{9801}^2-29\cdot1820^2=1$$ $$2^6\left(\big(U_{29}\big)^6+\big(U_{29}\big)^{-6}\right)^2 =\color{blue}{396^4}$$ then we can see those integers all over the formula as, $$\frac{2 \sqrt 2}{\color{blue}{9801}} \sum_{k=0}^\infty \frac{(4k)!}{k!^4} \frac{29\cdot\color{blue}{70\cdot13}\,k+1103}{\color{blue}{(396^4)}^k} = \frac{1}{\pi} $$ Nice, eh?
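To see just how fast this series converges, here is a small arbitrary-precision check (my own addition, using the mpmath library; the choice of 50 working digits and 3 terms is arbitrary):

from mpmath import mp, mpf, factorial, sqrt, pi

mp.dps = 50                      # work with 50 decimal digits

s = mpf(0)
for k in range(3):               # each term adds roughly 8 correct digits
    s += factorial(4*k) * (1103 + 26390*k) / (factorial(k)**4 * mpf(396)**(4*k))
s *= 2*sqrt(2) / 9801

print(1/s)                       # agrees with pi to about 24 digits after only 3 terms
print(+pi)                       # pi at the same precision, for comparison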
https://api.stackexchange.com
Find a positive integer solution $(x,y,z,a,b)$ for which $$\frac{1}{x}+ \frac{1}{y} + \frac{1}{z} + \frac{1}{a} + \frac{1}{b} = 1\;.$$ Is your answer the only solution? If so, show why. I was surprised that a teacher would assign this kind of problem to a 5th grade child. (I'm a college student tutor) This girl goes to a private school in a wealthy neighborhood. Please avoid the trivial $x=y=z=a=b=5$. Try looking for a solution where $ x \neq y \neq z \neq a \neq b$ or if not, look for one where one variable equals to another, but explain your reasoning. The girl was covering "unit fractions" in her class.
The perfect number $28=1+2+4+7+14$ provides a solution: $$\frac1{28}+\frac1{14}+\frac17+\frac14+\frac12=\frac{1+2+4+7+14}{28}=1\;.$$ If they’ve been doing unit (or ‘Egyptian’) fractions, I’d expect some to see that since $\frac16+\frac13=\frac12$, $$\frac16+\frac16+\frac16+\frac16+\frac13=1$$ is a solution, though not a much more interesting one than the trivial solution. The choice of letters might well suggest the solution $$\frac16+\frac16+\frac16+\frac14+\frac14\;.$$ A little playing around would show that $\frac14+\frac15=\frac9{20}$, which differs from $\frac12$ by just $\frac1{20}$; that yields the solution $$\frac1{20}+\frac15+\frac14+\frac14+\frac14\;.$$ If I were the teacher, I’d hope that some kids would realize that since the average of the fractions is $\frac15$, in any non-trivial solution at least one denominator must be less than $5$, and at least one must be greater than $5$. Say that $x\le y\le z\le a\le b$. Clearly $x\ge 2$, so let’s try $x=2$. Then we need to solve $$\frac1y+\frac1z+\frac1a+\frac1b=\frac12\;.$$ Now $y\ge 3$. Suppose that $y=3$; then $$\frac1z+\frac1a+\frac1b=\frac16\;.$$ Now $1,2$, and $3$ all divide $36$, and $\frac16=\frac6{36}$, so we can write $$\frac1{36}+\frac1{18}+\frac1{12}=\frac{1+2+3}{36}=\frac6{36}=\frac16\;,$$ and we get another ‘nice’ solution, $$\frac12+\frac13+\frac1{12}+\frac1{18}+\frac1{36}\;.$$
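If the student (or tutor) wants to see how many solutions there are, a brute-force search is easy; this sketch is mine, and the cap of 36 on the denominators is arbitrary, chosen only so that the solutions mentioned above fall inside the search space:

from fractions import Fraction
from itertools import combinations_with_replacement

# Exhaustive search over nondecreasing denominators 2..36
solutions = [c for c in combinations_with_replacement(range(2, 37), 5)
             if sum(Fraction(1, d) for d in c) == 1]

print(len(solutions))                         # many solutions, not just the trivial one
print((2, 4, 7, 14, 28) in solutions)         # the perfect-number-28 solution
print((2, 3, 12, 18, 36) in solutions)        # the 1/2 + 1/3 + 1/12 + 1/18 + 1/36 solution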
https://api.stackexchange.com
As cited in an answer to this question, the ground state electronic configuration of niobium is: $\ce{Nb: [Kr] 5s^1 4d^4}$ Why is that so? What factors stabilize this configuration, compared to the obvious $\ce{5s^2 4d^3}$ (Aufbau principle), or the otherwise possible $\ce{5s^0 4d^5}$ (half-filled shell)?
There is an explanation to this that can be generalized, which dips a little into quantum chemistry, which is known as the idea of pairing energy. I'm sure you can look up the specifics, but basically in comparing the possible configurations of $\ce{Nb}$, we see the choice of either pairing electrons at a lower energy, or of separating them at higher energy, as seen below:

d: ↿ ↿ ↿ _ _      ↿ ↿ ↿ ↿ _      ↿ ↿ ↿ ↿ ↿     ^
         OR                OR                  |
s: ⥮              ↿              _             Energy gap (E)

The top row is for the d-orbitals, which are higher in energy, and the bottom row is for the s-orbital, which is lower in energy. There is a quantifiable energy gap between the two as denoted on the side (unique for every element). As you may know, electrons like to get in the configuration that is lowest in energy. At first glance, that might suggest putting as many electrons in the s-orbital (lower energy) as possible, and then filling the rest in the d-orbital. This is known as the Aufbau principle and is widely taught in chemistry classes. It's not wrong, and works most of the time, but the story doesn't end there. There is a cost to pairing the electrons in the lower orbital, two costs actually, which I will define now: Repulsion energy: Pretty simple, the idea that e- repel, and having two of them in the same orbital will cost some energy. Normally counted as 1 C for every pair of electrons. Exchange energy: This is a little tricky, and probably the main reason this isn't taught until later in your chemistry education. Basically (due to quantum chemistry which I won't bore you with), there is a beneficial energy associated with having pairs of like-energy, like-spin electrons. Basically, for every pair of electrons at the same energy level (or same orbital shell in this case) and same spin (so, if you had 2 e- in the same orbital, no dice, since they have to be opposite spin), you accrue 1 K exchange energy, which is a stabilizing energy. (This is very simplified, but really "stabilizing energy" is nothing more than negative energy. I hope your thermodynamics is in good shape!) The thing with exchange (or K) energy is that you get one for every pair, so in the case: ↿ ↿ ↿ from say a p-subshell, you would get 3 K, for each pair, while from this example: ⥮ ↿ ↿ ↿ ↿ from a $\ce{d^6}$, you would get 10 K (for each unique pair, and none for the opposite spin e-) This K is quantifiable as well (and like the repulsion energy is unique for each atom). Thus, the combination of these two energies when compared to the band gap determines the state of the electron configuration. Using the example we started with:

d:  ↿ ↿ ↿ _ _      ↿ ↿ ↿ ↿ _      ↿ ↿ ↿ ↿ ↿     ^
s:  ⥮         OR   ↿         OR   _             |
PE: 3K + 1C        6K + 0C        10K + 0C      Energy gap (E)

You can see from the example that shoving 1 e- up from the s to the d-subshell results in a loss of 1C (losing positive or "destabilizing" repulsive energy) and gaining 3K (gaining negative or "stabilizing" exchange energy). Therefore, if the sum of these two is greater than the energy gap (i.e. 3K - 1C > E) then the electron will indeed be found in the d shell in $\ce{Nb}$'s ground state. Which is indeed the case for $\ce{Nb}$. Next, let's look at perhaps exciting the second s e- up to the d-subshell. We gain 4 additional K but don't lose any C, and we must again overcome the energy gap for this electron to be found in the d-subshell. It turns out that for $\ce{Nb}$: 4K + 0C < E (remember that C is considered a negative value, which we're not losing any of), so $\ce{Nb}$ is ultimately found in the $\ce{5s^1 4d^4}$ configuration.
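The bookkeeping above can be automated. The toy counter below is my own sketch and uses the same simplified rules as the answer (1 C per doubly occupied orbital, 1 K per unique same-spin pair within a subshell, cross-subshell exchange ignored); it reproduces the 3K + 1C, 6K + 0C and 10K + 0C tallies in the diagram:

from math import comb

def pairing_terms(n_s, n_d):
    # Fill each subshell with parallel spins as far as possible (Hund's rule)
    def subshell(n_elec, n_orb):
        up = min(n_elec, n_orb)
        down = n_elec - up
        k = comb(up, 2) + comb(down, 2)   # unique same-spin pairs -> exchange K
        c = down                          # doubly occupied orbitals -> repulsion C
        return k, c
    ks, cs = subshell(n_s, 1)             # one s orbital
    kd, cd = subshell(n_d, 5)             # five d orbitals
    return ks + kd, cs + cd

for n_s, n_d in [(2, 3), (1, 4), (0, 5)]:
    K, C = pairing_terms(n_s, n_d)
    print(f"5s^{n_s} 4d^{n_d}: {K}K + {C}C")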
https://api.stackexchange.com
Which software provides a good workflow from simple plotting of a few datapoints up to the creation of publication level graphics with detailed styles, mathematical typesetting and "professional quality"? This is a bit related to the question of David (What attributes make a figure professional quality?) but the focus is not on the attributes but on the software or general the workflow to get there. I have superficial experience with a number of programs, Gnuplot, Origin, Matplotlib, TikZ/PGFplot, Qtiplot but doing data analysis and nice figures at the same time seems rather hard to do. Is there some software that allows this or should I just dig deeper in one of the packages? Edit: My current workflow is a mix of different components, which more or less work together but in total it is not really efficient and I think this is typical for a number of scientists at an university lab. Typically it is a chain starting from the experiment to the publication like this: Get experimental data (usually in ASCII form, but with different layout, e.g. headers, comments, number of columns) Quick plot of the data to check whether nothing went wrong in Origin, Gnuplot or arcane plot program written 20 years ago. More detailed analysis of the data: subtracting background contributions, analysing dependencies and correlations, fitting with theoretical models. Many scientists use Origin for this task, some Matlab and Python/Scipy/Numpy usage is increasing. Creating professional figures, this involves adjusting to journal guidelines, mathematical typesetting and general editing. At the moment I use Origin for this but it has several drawbacks (just try to get a linewidth of exactly 0.5pt, it is not possible). For combining/polishing figures I mainly use Adobe Illustrator, as it can handle im-/export of PDF documents nicely but I would prefer not having to go through two steps for each diagram. I added an example of how it might look like in the end (as this has been created mostly by hand changing anything is painful and anything that provides an interface for example to set the linewidth for all elements would be nice):
If you have some experience with Python (or even if you don't), I would recommend using the Python scientific software that is available (SciPy, Pandas, ...) together with Matplotlib. Being a programming environment, you have full control over your data flows, data manipulations and plotting. You can also use the "full applications" Mayavi2 or Veusz.
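As a minimal illustration of that recommendation (my example, with made-up data and an arbitrary output file name), Matplotlib gives you exact control over details such as line widths in points and writes vector PDF output directly:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
fig, ax = plt.subplots(figsize=(3.4, 2.4))          # roughly single-column size, in inches
ax.plot(x, np.exp(-x / 3) * np.sin(2 * x), lw=0.5)  # linewidth of exactly 0.5 pt
ax.set_xlabel(r"Time $t$ (s)")
ax.set_ylabel(r"Amplitude $A(t)$")
fig.savefig("figure.pdf", bbox_inches="tight")      # vector output for the journal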
https://api.stackexchange.com
It is fine to say that for an object flying past a massive object, the spacetime is curved by the massive object, and so the object flying past follows the curved path of the geodesic, so it "appears" to be experiencing gravitational acceleration. Do we also say along with it, that the object flying past in reality experiences NO attraction force towards the massive object? Is it just following the spacetime geodesic curve while experiencing NO attractive force? Now come to the other issue: Supposing two objects are at rest relative to each other, i.e. they are not following any spacetime geodesic. Then why will they experience gravitational attraction towards each other? E.g. why will an apple fall to earth? Why won't it sit there in its original position high above the earth? How does the curvature of spacetime cause it to experience an attraction force towards the earth, and why would we need to exert a force in reverse direction to prevent it from falling? How does the curvature of spacetime cause this? When the apple was detached from the branch of the tree, it was stationary, so it did not have to follow any geodesic curve. So we cannot just say that it fell to earth because its geodesic curve passed through the earth. Why did the spacetime curvature cause it to start moving in the first place?
To really understand this you should study the differential geometry of geodesics in curved spacetimes. I'll try to provide a simplified explanation. Even objects "at rest" (in a given reference frame) are actually moving through spacetime, because spacetime is not just space, but also time: the apple is "getting older" - moving through time. The "velocity" through spacetime is called a four-velocity and it is always equal to the speed of light. Spacetime in a gravitational field is curved, so the time axis (in simple terms) is no longer orthogonal to the space axes. The apple moving first only in the time direction (i.e. at rest in space) starts accelerating in space thanks to the curvature (the "mixing" of the space and time axes) - the velocity in time becomes velocity in space. The acceleration happens because the time flows slower when the gravitational potential is decreasing. The apple is moving deeper into the gravitational field, thus its velocity in the "time direction" is changing (as time gets slower and slower). The four-velocity is conserved (always equal to the speed of light), so the object must accelerate in space. This acceleration has the direction of decreasing gravitational gradient. Edit - based on the comments I decided to clarify what the four-velocity is: 4-velocity is a four-vector, i.e. a vector with 4 components. The first component is the "speed through time" (how much of the coordinate time elapses per 1 unit of proper time). The remaining 3 components are the classical velocity vector (speed in the 3 spatial directions). $$ U=\left(c\frac{dt}{d\tau},\frac{dx}{d\tau},\frac{dy}{d\tau},\frac{dz}{d\tau}\right) $$ When you observe the apple in its rest frame (the apple is at rest - zero spatial velocity), the whole 4-velocity is in the "speed through time". It is because in the rest frame the coordinate time equals the proper time, so $\frac{dt}{d\tau} = 1$. When you observe the apple from some other reference frame, where the apple is moving at some speed, the coordinate time is no longer equal to the proper time. Time dilation means that less proper time is measured by the apple than the elapsed coordinate time (the time of the apple is slower than the time in the reference frame from which we are observing the apple). So in this frame, the "speed through time" of the apple is more than the speed of light ($\frac{dt}{d\tau} > 1$), but the speed through space is also increasing. The magnitude of the 4-velocity always equals c, because it is an invariant (it does not depend on the choice of the reference frame). It is defined as: $$ \left\|U\right\| =\sqrt[2]{c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2-\left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2} $$ Notice the minus signs in the expression - these come from the Minkowski metric. The components of the 4-velocity can change when you switch from one reference frame to another, but the magnitude stays unchanged (all the changes in components "cancel out" in the magnitude).
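A quick numerical check of that invariance (my own addition; the value 0.8c is an arbitrary choice):

import numpy as np

c = 299_792_458.0                    # speed of light, m/s
v = 0.8 * c                          # arbitrary spatial speed along x
gamma = 1 / np.sqrt(1 - (v / c)**2)

# Four-velocity for motion along x: (c dt/dtau, dx/dtau, dy/dtau, dz/dtau)
U = np.array([c * gamma, gamma * v, 0.0, 0.0])

# Minkowski magnitude with the (+, -, -, -) signs used in the formula above
norm = np.sqrt(U[0]**2 - U[1]**2 - U[2]**2 - U[3]**2)
print(norm / c)                      # 1.0 regardless of v: the magnitude is always c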
https://api.stackexchange.com
Suppose that I'm working on a scientific code in C++. In a recent discussion with a colleague, it was argued that expression templates could be a really bad thing, potentially making software compilable only on certain versions of gcc. Supposedly, this problem has affected a few scientific codes, as alluded to in the subtitles of this parody of Downfall. (These are the only examples I know of, hence the link.) However, other people have argued that expression templates are useful because they can yield performance gains, as in this paper in SIAM Journal of Scientific Computing, by avoiding storage of intermediate results in temporary variables. I don't know a whole lot about template metaprogramming in C++, but I do know that it is one approach used in automatic differentiation and in interval arithmetic, which is how I got into a discussion about expression templates. Given both the potential advantages in performance and the potential disadvantages in maintenance (if that's even the right word), when should I use C++ expression templates in computational science, and when should I avoid them?
My problem with expression templates is that they are a very leaky abstraction. You put a lot of work into writing very complicated code to do a simple task with nicer syntax. But if you want to change the algorithm, you have to mess with the dirty code, and if you slip up with types or syntax, you get completely unintelligible error messages. If your application maps perfectly to a library based on expression templates, then it might be worth considering, but if you aren't sure, I would recommend just writing normal code. Sure, the high level code is less pretty, but you can just do what needs to be done. As a benefit, compilation time and binary sizes will go way down and you won't have to cope with huge variance in performance due to compiler and compilation flag choice.
https://api.stackexchange.com
The datasheet of the 24LC256 EEPROM states that: The SDA bus requires a pull-up resistor to VCC (typical 10 kΩ for 100 kHz, 2 kΩ for 400 kHz and 1 MHz). I thought that any resistor with a kΩ value would do the job (and it seems that my EEPROM works fine at different frequencies with a 10 kΩ resistor). My questions are: is there a correct value for pull-up resistors ? is there a law/rule to determine this value ? how do different resistance values affect the I²C data bus ?
The correct pullup resistance for the I2C bus depends on the total capacitance on the bus and the frequency you want to operate the bus at. The formula from the ATmega168 datasheet (which I believe comes from the official I2C spec) is -- $$\text{Freq}<100\text{kHz} \implies R_{\text{min}}=\frac{V_{cc}-0.4\text{V}}{3\text{mA}}, R_{\text{max}}=\frac{1000\text{ns}}{C_{\text{bus}}}$$ $$\text{Freq}>100\text{kHz} \implies R_{\text{min}}=\frac{V_{cc}-0.4\text{V}}{3\text{mA}}, R_{\text{max}}=\frac{300\text{ns}}{C_{\text{bus}}}$$ The Microchip 24LC256 specifies a maximum pin capacitance of 10pF (which is fairly typical). Count up the number of devices you have in parallel on the bus and use the formula above to calculate a range of values that will work. If you are powering off of batteries I would use values that are at the high end of the range. If there are no power limits on the power source or power dissipation issues in the ICs I would use values on the lower end of the range. I sell some kits with an I2C RTC (DS1337). I include 4K7 resistors in the kit which seems like a reasonable compromise for most users.
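Plugging numbers into those formulas is easy to script; the sketch below is mine, and the 4-device, roughly 60 pF example bus is an assumption for illustration only:

def i2c_pullup_range(vcc, c_bus_pf, standard_mode=True):
    # R_min is set by the 3 mA sink-current limit, R_max by the allowed rise time
    r_min = (vcc - 0.4) / 3e-3
    rise = 1000e-9 if standard_mode else 300e-9      # 100 kHz vs 400 kHz / 1 MHz
    r_max = rise / (c_bus_pf * 1e-12)
    return r_min, r_max

# Assumed example: 5 V bus, 4 devices at ~10 pF each plus ~20 pF of trace capacitance
print(i2c_pullup_range(5.0, 60, standard_mode=True))    # 100 kHz: ~1.5 kOhm to ~17 kOhm
print(i2c_pullup_range(5.0, 60, standard_mode=False))   # 400 kHz: ~1.5 kOhm to ~5 kOhm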
https://api.stackexchange.com
Given Newton's third law, why is there motion at all? Should not all forces even themselves out, so nothing moves at all? When I push a table using my finger, the table applies the same force onto my finger like my finger does on the table just with an opposing direction, nothing happens except that I feel the opposing force. But why can I push a box on a table by applying force ($F=ma$) on one side, obviously outbalancing the force the box has on my finger and at the same time outbalancing the friction the box has on the table? I obviously have the greater mass and acceleration as for example the matchbox on the table and thusly I can move it, but shouldn't the third law prevent that from even happening? Shouldn't the matchbox just accommodate to said force and applying the same force to me in opposing direction?
I think it's a great question, and enjoyed it very much when I grappled with it myself. Here's a picture of some of the forces in this scenario.$^\dagger$ The ones that are the same colour as each other are pairs of equal magnitude, opposite direction forces from Newton's third law. (W and R are of equal magnitude in opposite directions, but they're acting on the same object - that's Newton's first law in action.) While $F_\text{matchbox}$ does press back on my finger with an equal magnitude to $F_\text{finger}$, it's no match for $F_\text{muscles}$ (even though I've not been to the gym in years). At the matchbox, the forward force from my finger overcomes the friction force from the table. Each object has an imbalance of forces giving rise to acceleration leftwards. The point of the diagram is to make clear that the third law makes matched pairs of forces that act on different objects. Equilibrium from Newton's first or second law is about the resultant force at a single object. $\dagger$ (Sorry that the finger doesn't actually touch the matchbox in the diagram. If it had, I wouldn't have had space for the important safety notice on the matches. I wouldn't want any children to be harmed because of a misplaced force arrow. Come to think of it, the dagger on this footnote looks a bit sharp.)
https://api.stackexchange.com
$$\sum_{n=1}^\infty\frac1{n^s}$$ only converges to $\zeta(s)$ if $\text{Re}(s)>1$. Why should analytically continuing to $\zeta(-1)$ give the right answer?
there are many ways to see that your result is the right one. What does the right one mean? It means that whenever such a sum appears anywhere in physics - I explicitly emphasize that not just in string theory, also in experimentally doable measurements of the Casimir force (between parallel metals resulting from quantized standing electromagnetic waves in between) - and one knows that the result is finite, the only possible finite part of the result that may be consistent with other symmetries of the problem (and that is actually confirmed experimentally whenever it is possible) is equal to $-1/12$. It's another widespread misconception (see all the incorrect comments right below your question) that the zeta-function regularization is the only way how to calculate the proper value. Let me show a completely different calculation - one that is a homework exercise in Joe Polchinski's "String Theory" textbook. Exponential regulator method Add an exponentially decreasing regulator to make the sum convergent - so that the sum becomes $$ S = \sum_{n=1}^{\infty} n e^{-\epsilon n} $$ Note that this is not equivalent to generalizing the sum to the zeta-function. In the zeta-function, the $n$ is the base that is exponentiated to the $s$th power. Here, the regulator has $n$ in the exponent. Obviously, the original sum of natural numbers is obtained in the $\epsilon\to 0$ limit of the formula for $S$. In physics, $\epsilon$ would be viewed as a kind of "minimum distance" that can be resolved. The sum above may be exactly evaluated and the result is (use Mathematica if you don't want to do it yourself, but you can do it yourself) $$ S = \frac{e^\epsilon}{(e^\epsilon-1)^2} $$ We will only need some Laurent expansion around $\epsilon = 0$. $$ S = \frac{1+\epsilon+\epsilon^2/2 + O(\epsilon^3)}{(\epsilon+\epsilon^2/2+\epsilon^3/6+O(\epsilon^4))^2} $$ We have $$ S = \frac{1}{\epsilon^2} \frac{1+\epsilon+\epsilon^2/2+O(\epsilon^3)}{(1+\epsilon/2+\epsilon^2/6+O(\epsilon^3))^2} $$ You see that the $1/\epsilon^2$ leading divergence survives and the next subleading term cancels. The resulting expansion may be calculated with this Mathematica command 1/epsilon^2 * Series[epsilon^2 Sum[n Exp[-n epsilon], {n, 1, Infinity}], {epsilon, 0, 5}] and the result is $$ \frac{1}{\epsilon^2} - \frac{1}{12} + \frac{\epsilon^2}{240} + O(\epsilon^4) $$ In the $\epsilon\to 0$ limit we were interested in, the $\epsilon^2/240$ term as well as the smaller ones go to zero and may be erased. The leading divergence $1/\epsilon^2$ may be and must be canceled by a local counterterm - a vacuum energy term. This is true for the Casimir effect in electromagnetism (in this case, the cancelled pole may be interpreted as the sum of the zero-point energies in the case that no metals were bounding the region), zero-point energies in string theory, and everywhere else. The cancellation of the leading divergence is needed for physics to be finite - but one may guarantee that the counterterm won't affect the finite term, $-1/12$, which is the correct result of the sum. In physics applications, $\epsilon$ would be dimensionful and its different powers are sharply separated and may be treated individually. That's why the local counterterms may eliminate the leading divergence but don't affect the finite part. That's also why you couldn't have used a more complex regulator, like $\exp(-(\epsilon+\epsilon^2)n)$. There are many other, apparently inequivalent ways to compute the right value of the sum. It is not just the zeta function. 
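The same Laurent expansion can be reproduced with open-source tools; here is a sympy version of that Mathematica command (my own sketch), applied to the closed form derived above:

import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Closed form of sum_{n>=1} n*exp(-n*eps), as computed above
S = sp.exp(eps) / (sp.exp(eps) - 1)**2

# The expansion starts 1/epsilon**2 - 1/12 + epsilon**2/240 - ..., so the
# finite part left after cancelling the 1/epsilon**2 divergence is -1/12
print(sp.series(S, eps, 0, 6))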
Euler's method Let me present one more, slightly less modern, method that was used by Leonhard Euler to calculate that the sum of natural numbers is $-1/12$. It's of course a bit more heuristic but his heuristic approach showed that he had a good intuition and the derivation could be turned into a modern physics derivation, too. We will work with two sums, $$ S = 1+2+3+4+5+\dots, \quad T = 1-2+3-4+5-\dots $$ Extrapolating the geometric and similar sums to the divergent (and, in this case, marginally divergent) domain of values of $x$, the expression $T$ may be summed according to the Taylor expansion $$ \frac{1}{(1+x)^2} = 1 - 2x + 3x^2 -4x^3 + \dots $$ Substitute $x=1$ to see that $T=+1/4$. The value of $S$ is easily calculated now: $$ T = (1+2+3+\dots) - 2\times (2+4+6+\dots) = (1+2+3+\dots) (1 - 4) = -3S$$ so $S=-T/3=-1/12$. A zeta-function calculation A somewhat unusual calculation of $\zeta(-1)=-1/12$ of mine may be found in the Pictures of Yellows Roses, a Czech student journal. The website no longer works, although a working snapshot of the original website is still available through the WebArchive (see this link). A 2014 English text with the same evaluation at the end can be found at The Reference Frame. The comments were in Czech but the equations represent bulk of the language that really matters, so the Czech comments shouldn't be a problem. A new argument (subscript) $s$ is added to the zeta function. The new function is the old zeta function for $s=0$ and for $s=1$, it only differs by one. We Taylor expand around $s=0$ to get to $s=1$ and we find out that only a finite number of terms survives if the main argument $x$ is a non-positive integer. The resulting recursive relations for the zeta function allow us to compute the values of the zeta-function at integers smaller than $1$, and prove that the function vanishes at negative even values of $x$.
https://api.stackexchange.com
Adaptive thresholding has been discussed in a few questions earlier: Adaptive Thresholding for liver segmentation using Matlab What are the best algorithms for document image thresholding in this example? Of course, there are many algorithms for adaptive thresholding. I want to know which ones you have found most effective and useful. Which adaptive algorithms have you used the most, and for which applications? How did you come to choose those algorithms?
I do not think mine will be a complete answer, but I'll offer what I know and since this is a community edited site, I hope somebody will give a complementary answer soon :) Adaptive thresholding methods are those that do not use the same threshold throughout the whole image. But, for some simpler usages, it is sometimes enough to just pick a threshold with a method smarter than the most simple iterative method. Otsu's method is a popular thresholding method that assumes the image contains two classes of pixels - foreground and background, and has a bi-modal histogram. It then attempts to minimize their combined spread (intra-class variance). The simplest algorithms that can be considered truly adaptive thresholding methods would be the ones that split the image into a grid of cells and then apply a simple thresholding method (e.g. iterative or Otsu's method) on each cell treating it as a separate image (and presuming a bi-modal histogram). If a sub-image cannot be thresholded well, the threshold from one of the neighboring cells can be used. An alternative approach to finding the local threshold is to statistically examine the intensity values of the local neighborhood of each pixel. The threshold is different for each pixel and calculated from its local neighborhood (a median, average, and other choices are possible). There is an implementation of this kind of method included in the OpenCV library in the cv::adaptiveThreshold function. I found another similar method called Bradley Local Thresholding. It also examines the neighborhood of each pixel, setting the pixel to black if its brightness is t percent lower than the average brightness of the surrounding pixels. The corresponding paper can be found here. This stackoverflow answer mentions a local (adaptive) thresholding method called Niblack but I have not heard of it before. Lastly, there is a method I have used in one of my previous smaller projects, called Image Thresholding by Variational Minimax Optimization. It is an iterative method, based on optimizing an energy function that is a nonlinear combination of two components. One component aims to calculate the threshold based on the position of strongest intensity changes in the image. The other component aims to smooth the threshold at the (object) border areas. It has proven fairly good on images of analog instruments (various shading and reflection from glass/plastic present), but required a careful choice of the number of iterations. Late edit: Inspired by the comment to this answer. There is one more way I know of to work around uneven lighting conditions. I will write here about bright objects on a dark background, but the same reasoning can be applied if the situation is reverse. Threshold the white top-hat transform of the image with a constant threshold instead of the original image. A white top hat of an image is nothing but a difference between the image $f$ and its opening $\gamma(f)$. As further explanation let me offer a quote from P. Soille: Morphological Image Analysis: An opening of the original image with a large square SE removes all relevant image structures but preserves the illumination function. The white top-hat of the original image or subtraction of the illumination function from the original image outputs an image with a homogeneous illumination.
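For reference, both the global Otsu threshold and the local-mean adaptive threshold mentioned above are exposed in OpenCV's Python bindings; this is a minimal sketch of mine (the input file name, the 31-pixel block size and the offset of 10 are arbitrary choices):

import cv2

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# Global threshold chosen by Otsu's method (assumes a roughly bimodal histogram)
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local adaptive threshold: each pixel is compared with the mean of its
# 31x31 neighbourhood minus a constant offset of 10
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, 10)

cv2.imwrite("otsu.png", otsu)
cv2.imwrite("adaptive.png", adaptive)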
https://api.stackexchange.com
In statistics and its various applications, we often calculate the covariance matrix, which is positive definite (in the cases considered) and symmetric, for various uses. Sometimes, we need the inverse of this matrix for various computations (quadratic forms with this inverse as the (only) center matrix, for example). Given the qualities of this matrix, and the intended uses, I wonder: What is the best, in terms of numerical stability, way to go about computing or using (let's say for quadratic forms or matrix-vector multiplication in general) this inverse? Some factorization that can come in handy?
A Cholesky factorization makes the most sense for the best stability and speed when you are working with a covariance matrix, since the covariance matrix will be positive semi-definite symmetric matrix. Cholesky is a natural here. BUT... IF you intend to compute a Cholesky factorization, before you ever compute the covariance matrix, do yourself a favor. Make the problem maximally stable by computing a QR factorization of your matrix. (A QR is fast too.) That is, if you would compute the covariance matrix as $$ C = A^{T} A $$ where $A$ has had the column means removed, then see that when you form $C$, it squares the condition number. So better is to form the QR factors of $A$ rather than explicitly computing a Cholesky factorization of $A^{T}A$. $$ A = QR $$ Since Q is orthogonal, $$ \begin{align} C &= (QR)^{T} QR \\ &= R^T Q^T QR \\ &= R^T I R \\ &= R^{T} R \end{align} $$ Thus we get the Cholesky factor directly from the QR factorization, in the form of $R^{T}$. If a $Q$-less QR factorization is available, this is even better since you don't need $Q$. A $Q$-less QR is a fast thing to compute, since $Q$ is never generated. It becomes merely a sequence of Householder transformations. (A column pivoted, $Q$-less QR would logically be even more stable, at the cost of some extra work to choose the pivots.) The great virtue of using the QR here is it is highly numerically stable on nasty problems. Again, this is because we never had to form the covariance matrix directly to compute the Cholesky factor. As soon as you form the product $A^{T}A$, you square the condition number of the matrix. Effectively, you lose information down in the parts of that matrix where you originally had very little information to start with. Finally, as another response points out, you don't even need to compute and store the inverse at all, but use it implicitly in the form of backsolves on triangular systems.
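A small NumPy sketch of mine illustrating the point (random data; the 500-by-6 size is arbitrary): the R factor from a Q-less QR of the centered data matrix is, up to row signs, the transposed Cholesky factor, and quadratic-form solves need only triangular backsolves rather than an explicit inverse:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 6))
A -= A.mean(axis=0)                           # remove the column means

C = A.T @ A                                   # covariance-type matrix (squares the condition number)
L = np.linalg.cholesky(C)                     # C = L L^T

R = np.linalg.qr(A, mode='r')                 # Q-less QR: C = R^T R without ever forming C
print(np.allclose(np.abs(R.T), np.abs(L)))    # same factor, up to signs of the rows of R

# Solve C x = b with two triangular backsolves instead of computing C^{-1}
b = rng.standard_normal(6)
x = np.linalg.solve(R, np.linalg.solve(R.T, b))
print(np.allclose(C @ x, b))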
https://api.stackexchange.com
In speech recognition, the front end generally does signal processing to allow feature extraction from the audio stream. A discrete Fourier transform (DFT) is applied twice in this process. The first time is after windowing; after this Mel binning is applied and then another Fourier transform. I've noticed however, that it is common in speech recognizers (the default front end in CMU Sphinx, for example) to use a discrete cosine transform (DCT) instead of a DFT for the second operation. What is the difference between these two operations? Why would you do DFT the first time and then a DCT the second time?
The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) perform similar functions: they both decompose a finite-length discrete-time vector into a sum of scaled-and-shifted basis functions. The difference between the two is the type of basis function used by each transform; the DFT uses a set of harmonically-related complex exponential functions, while the DCT uses only (real-valued) cosine functions. The DFT is widely used for general spectral analysis applications that find their way into a range of fields. It is also used as a building block for techniques that take advantage of properties of signals' frequency-domain representation, such as the overlap-save and overlap-add fast convolution algorithms. The DCT is frequently used in lossy data compression applications, such as the JPEG image format. The property of the DCT that makes it quite suitable for compression is its high degree of "spectral compaction;" at a qualitative level, a signal's DCT representation tends to have more of its energy concentrated in a small number of coefficients when compared to other transforms like the DFT. This is desirable for a compression algorithm; if you can approximately represent the original (time- or spatial-domain) signal using a relatively small set of DCT coefficients, then you can reduce your data storage requirement by only storing the DCT outputs that contain significant amounts of energy.
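A small illustration of the compaction property (my own sketch; the ramp signal and the choice of 8 coefficients are arbitrary — the ramp is convenient because its implied periodic extension has a jump, while its symmetric DCT extension does not):

import numpy as np
from scipy.fft import dct, fft

x = np.linspace(0.0, 1.0, 64)                 # a smooth ramp signal

def top_k_energy(coeffs, k=8):
    e = np.abs(coeffs) ** 2
    return np.sort(e)[::-1][:k].sum() / e.sum()

print("DCT:", top_k_energy(dct(x, norm='ortho')))   # nearly all energy in a few coefficients
print("DFT:", top_k_energy(fft(x)))                 # energy leaks across many more bins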
https://api.stackexchange.com
There are many tutorials that use a pull-up or pull-down resistor in conjunction with a switch to avoid a floating ground, e.g. Many of these projects use a 10K resistor, merely remarking that it is a good value. Given a particular circuit, how do I determine the appropriate value for a pull-down resistor? Can it be calculated, or is it best determined by experimentation?
Use 10 kΩ, it's a good value. For more detail, we have to look at what a pullup does. Let's say you have a pushbutton you want to read with a microcontroller. The pushbutton is a momentary SPST (Single Pole Single Throw) switch. It has two connection points which are either connected or not. When the button is pressed, the two points are connected (switch is closed). When released, they are not connected (switch is open). Microcontrollers don't inherently detect connection or disconnection. What they do sense is a voltage. Since this switch has only two states it makes sense to use a digital input, which is after all designed to be only in one of two states. The micro can sense which state a digital input is in directly. A pullup helps convert the open/closed connection of the switch to a low or high voltage the microcontroller can sense. One side of the switch is connected to ground and the other to the digital input. When the switch is pressed, the line is forced low because the switch essentially shorts it to ground. However, when the switch is released, nothing is driving the line to any particular voltage. It could just stay low, pick up other nearby signals by capacitive coupling, or eventually float to a specific voltage due to the tiny bit of leakage current thru the digital input. The job of the pullup resistor is to provide a positive guaranteed high level when the switch is open, but still allow the switch to safely short the line to ground when closed. There are two main competing requirements on the size of the pullup resistor. It has to be low enough to solidly pull the line high, but high enough to not cause too much current to flow when the switch is closed. Both those are obviously subjective and their relative importance depends on the situation. In general, you make the pullup just low enough to make sure the line is high when the switch is open, given all the things that might make the line low otherwise. Let's look at what it takes to pull up the line. Looking only at the DC requirement uncovers the leakage current of the digital input line. The ideal digital input has infinite impedance. Real ones don't, of course, and the extent they are not ideal is usually expressed as a maximum leakage current that can either come out of or go into the pin. Let's say your micro is specified for 1 µA maximum leakage on its digital input pins. Since the pullup has to keep the line high, the worst case is assuming the pin looks like a 1 µA current sink to ground. If you were to use a 1 MΩ pullup, for example, then that 1 µA would cause 1 Volt across the 1 MΩ resistor. Let's say this is a 5V system, so that means the pin is only guaranteed to be up to 4V. Now you have to look at the digital input spec and see what the minimum voltage requirement is for a logic high level. That can be 80% of Vdd for some micros, which would be 4V in this case. Therefore a 1 MΩ pullup is right at the margin. You need at least a little less than that for guaranteed correct behaviour due to DC considerations. However, there are other considerations, and these are harder to quantify. Every node has some capacitive coupling to all other nodes, although the magnitude of the coupling falls off with distance such that only nearby nodes are relevant. If these other nodes have signals on them, these signals could couple onto your digital input. A lower value pullup makes the line lower impedance, which reduces the amount of stray signal it will pick up. 
It also gives you a higher minimum guaranteed DC level against the leakage current, so there is more room between that DC level and where the digital input might interpret the result as a logic low instead of the intended logic high. So how much is enough? Clearly the 1 MΩ pullup in this example is not enough (too high a resistance). It's nearly impossible to guess coupling to nearby signals, but I'd want at least an order of magnitude margin over the minimum DC case. That means I want a 100 kΩ pullup or lower at least, although if there is much noise around I'd want it to be lower. There is another consideration driving the pullup lower, and that is rise time. The line will have some stray capacitance to ground, so will exponentially decay towards the supply value instead of instantly going there. Let's say all the stray capacitance adds up to 20 pF. That times the 100 kΩ pullup is 2 µs. It takes 3 time constants to get to 95% of the settling value, or 6 µs in this case. That is of no consequence in human time so doesn't matter in this example, but if this were a digital bus line you wanted to run at 200 kHz data rate it wouldn't work. Now let's look at the other competing consideration, which is the current wasted when the switch is pressed. If this unit is running off of line power or otherwise handling substantial power, a few mA won't matter. At 5V it takes 5 kΩ to draw 1 mA. That's actually "a lot" of current in some cases, and well more than required due to the other considerations. If this is a battery powered device and the switch could be on for a substantial fraction of the time, then every µA may matter and you have to think about this very carefully. In some cases you might sample the switch periodically and only turn on the pullup for a short time around the sample to minimize current draw. Other than special considerations like battery operation, 100 kΩ is high enough impedance to make me nervous about picking up noise. 1 mA of current wasted when the switch is on seems unnecessarily large. So 500 µA, which means 10 kΩ impedance is about right. Like I said, use 10 kΩ. It's a good value.
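The numbers in this answer are easy to reproduce; the quick script below is my own and just re-derives them for the assumed 5 V supply, 1 µA leakage and 20 pF of stray capacitance:

vdd = 5.0          # supply voltage, V
leak = 1e-6        # worst-case digital input leakage, A
c_stray = 20e-12   # stray capacitance on the line, F

for r in (1e6, 100e3, 10e3):
    v_high = vdd - leak * r        # guaranteed DC level with the switch open
    t_settle = 3 * r * c_stray     # ~3 time constants to reach ~95% of the final value
    i_closed = vdd / r             # current wasted while the switch is pressed
    print(f"{r/1e3:6.0f} kOhm: {v_high:.2f} V high, "
          f"{t_settle*1e6:5.2f} us settle, {i_closed*1e6:6.0f} uA pressed")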
https://api.stackexchange.com
I've recently heard a riddle, which looks quite simple, but I can't solve it. A girl thinks of a number which is 1, 2, or 3, and a boy then gets to ask just one question about the number. The girl can only answer "Yes", "No", or "I don't know," and after the girl answers it, he knows what the number is. What is the question? Note that the girl is professional in maths and knows EVERYTHING about these three numbers. EDIT: The person who told me this just said the correct answer is: "I'm also thinking of a number. It's either 1 or 2. Is my number less than yours?"
"I am thinking of a number which is either 0 or 1. Is the sum of our numbers greater than 2?"
https://api.stackexchange.com
It seems that through various related questions here, there is consensus that the "95%" part of what we call a "95% confidence interval" refers to the fact that if we were to exactly replicate our sampling and CI-computation procedures many times, 95% of thusly computed CIs would contain the population mean. It also seems to be the consensus that this definition does not permit one to conclude from a single 95% CI that there is a 95% chance that the mean falls somewhere within the CI. However, I don't understand how the former doesn't imply the latter insofar as, having imagined many CIs 95% of which contain the population mean, shouldn't our uncertainty (with regards to whether our actually-computed CI contains the population mean or not) force us to use the base-rate of the imagined cases (95%) as our estimate of the probability that our actual CI contains the mean? I've seen posts argue along the lines of "the actually-computed CI either contains the population mean or it doesn't, so its probability is either 1 or 0", but this seems to imply a strange definition of probability that is dependent on unknown states (i.e. a friend flips a fair coin, hides the result, and I am disallowed from saying there is a 50% chance that it's heads). Surely I'm wrong, but I don't see where my logic has gone awry...
Part of the issue is that the frequentist definition of a probability doesn't allow a nontrivial probability to be applied to the outcome of a particular experiment, but only to some fictitious population of experiments from which this particular experiment can be considered a sample. The definition of a CI is confusing as it is a statement about this (usually) fictitious population of experiments, rather than about the particular data collected in the instance at hand. So part of the issue is one of the definition of a probability: The idea of the true value lying within a particular interval with probability 95% is inconsistent with a frequentist framework. Another aspect of the issue is that the calculation of the frequentist confidence doesn't use all of the information contained in the particular sample relevant to bounding the true value of the statistic. My question "Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals" discusses a paper by Edwin Jaynes which has some really good examples that really highlight the difference between confidence intervals and credible intervals. One that is particularly relevant to this discussion is Example 5, which discusses the difference between a credible and a confidence interval for estimating the parameter of a truncated exponential distribution (for a problem in industrial quality control). In the example he gives, there is enough information in the sample to be certain that the true value of the parameter lies nowhere in a properly constructed 90% confidence interval! This may seem shocking to some, but the reason for this result is that confidence intervals and credible intervals are answers to two different questions, from two different interpretations of probability. The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in $100p$% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability $p$ given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself. The main reason that any particular 95% confidence interval does not imply a 95% chance of containing the mean is because the confidence interval is an answer to a different question, so it is only the right answer when the answer to the two questions happens to have the same numerical solution. In short, credible and confidence intervals answer different questions from different perspectives; both are useful, but you need to choose the right interval for the question you actually want to ask. If you want an interval that admits an interpretation of a 95% (posterior) probability of containing the true value, then choose a credible interval (and, with it, the attendant conceptualization of probability), not a confidence interval. The thing you ought not to do is to adopt a different definition of probability in the interpretation than that used in the analysis. Thanks to @cardinal for his refinements! 
Here is a concrete example, from David MacKay's excellent book "Information Theory, Inference and Learning Algorithms" (page 464): Let the parameter of interest be $\theta$ and the data $D$, a pair of points $x_1$ and $x_2$ drawn independently from the following distribution: $p(x|\theta) = \left\{\begin{array}{cl} 1/2 & x = \theta,\\1/2 & x = \theta + 1, \\ 0 & \mathrm{otherwise}\end{array}\right.$ If $\theta$ is $39$, then we would expect to see the datasets $(39,39)$, $(39,40)$, $(40,39)$ and $(40,40)$ all with equal probability $1/4$. Consider the confidence interval $[\theta_\mathrm{min}(D),\theta_\mathrm{max}(D)] = [\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$. Clearly this is a valid 75% confidence interval because if you re-sampled the data, $D = (x_1,x_2)$, many times then the confidence interval constructed in this way would contain the true value 75% of the time. Now consider the data $D = (29,29)$. In this case the frequentist 75% confidence interval would be $[29, 29]$. However, assuming the model of the generating process is correct, $\theta$ could be 28 or 29 in this case, and we have no reason to suppose that 29 is more likely than 28, so the posterior probability is $p(\theta=28|D) = p(\theta=29|D) = 1/2$. So in this case the frequentist confidence interval is clearly not a 75% credible interval as there is only a 50% probability that it contains the true value of $\theta$, given what we can infer about $\theta$ from this particular sample. Yes, this is a contrived example, but if confidence intervals and credible intervals were not different, then they would still be identical in contrived examples. Note the key difference is that the confidence interval is a statement about what would happen if you repeated the experiment many times; the credible interval is a statement about what can be inferred from this particular sample.
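The 75%/50% split in this example is easy to check numerically. Here is a short Python simulation (my own illustration, not from MacKay's book) that estimates the unconditional coverage of $[\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$ and its coverage conditional on having observed two equal values:

import random

random.seed(0)
theta = 39
n_trials = 100_000
covered = equal_pairs = covered_given_equal = 0

for _ in range(n_trials):
    x1 = theta + random.randint(0, 1)   # each observation is theta or theta + 1
    x2 = theta + random.randint(0, 1)
    hit = min(x1, x2) <= theta <= max(x1, x2)
    covered += hit
    if x1 == x2:
        equal_pairs += 1
        covered_given_equal += hit

print("overall coverage        :", covered / n_trials)                 # ~0.75
print("coverage given x1 == x2 :", covered_given_equal / equal_pairs)  # ~0.50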
https://api.stackexchange.com
This is something that has been bugging me for a while, and I couldn't find any satisfactory answers online, so here goes: After reviewing a set of lectures on convex optimization, Newton's method seems to be a far superior algorithm to gradient descent for finding globally optimal solutions, because Newton's method can provide a guarantee for its solution, it's affine invariant, and most of all it converges in far fewer steps. Why are second-order optimization algorithms, such as Newton's method, not as widely used as stochastic gradient descent in machine learning problems?
Gradient descent optimizes a function using knowledge of its first derivative (the gradient). Newton's method, a root-finding algorithm applied to that derivative, optimizes a function using knowledge of its second derivative as well. That can be faster when the second derivative is known and easy to compute (the Newton-Raphson algorithm is used in logistic regression). However, the analytic expression for the second derivative is often complicated or intractable, requiring a lot of computation. Numerical methods for computing the second derivative also require a lot of computation -- if $N$ values are required to compute the first derivative, $N^2$ are required for the second derivative.
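To make that concrete, here is a small Python sketch (my own toy example, not from the original answer) minimizing a one-dimensional function both ways; Newton needs far fewer iterations, but each step requires the second derivative, and in $d$ dimensions the gradient has $d$ entries while the Hessian has $d^2$:

import math

# Toy objective: f(x) = x^2 + exp(x), minimized where f'(x) = 2x + exp(x) = 0.
def df(x):  return 2 * x + math.exp(x)   # first derivative
def d2f(x): return 2 + math.exp(x)       # second derivative

x = 2.0                          # gradient descent: first-order information only
for _ in range(50):
    x -= 0.1 * df(x)
print("gradient descent (50 steps):", x)

x = 2.0                          # Newton: also divide by the second derivative
for _ in range(6):
    x -= df(x) / d2f(x)
print("Newton's method  (6 steps) :", x)

Both runs converge to roughly x = -0.3517, but Newton gets there in a handful of steps.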
https://api.stackexchange.com
The process of sleep seems to be very disadvantageous to an organism as it is extremely vulnerable to predation for several hours at a time. Why is sleep necessary in so many animals? What advantage did it give the individuals that evolved to have it as an adaptation? When and how did it likely occur in the evolutionary path of animals?
This good non-scholarly article covers some of the usual advantages (rest/regeneration). One of the research papers they mentioned (they linked to press release) was Conservation of Sleep: Insights from Non-Mammalian Model Systems by John E. Zimmerman, Ph.D.; Trends Neurosci. 2008 July; 31(7): 371–376. Published online 2008 June 5. doi: 10.1016/j.tins.2008.05.001; NIHMSID: NIHMS230885. To quote from the press release: Because the time of lethargus coincides with a time in the round worms’ life cycle when synaptic changes occur in the nervous system, they propose that sleep is a state required for nervous system plasticity. In other words, in order for the nervous system to grow and change, there must be down time of active behavior. Other researchers at Penn have shown that, in mammals, synaptic changes occur during sleep and that deprivation of sleep results in a disruption of these synaptic changes.
https://api.stackexchange.com
I'll try to make this as brief as possible: Dissolved two teaspoons of table sugar (sucrose) in about 250ml water. Sipped it, and as expected it tasted sweet. I let the rest of it sit in the freezer overnight. Next day, I took out the frozen sugar solution and, well, licked it. Surprisingly, I could barely taste any sugar in it. It was almost as though I was licking regular ice. Why is it that I'm not able to perceive any sweetness here? I was under the impression that since the solution, being a homogeneous mixture of sugar and water, was sweet, the "popsicle" I made ought to taste sweet too (since the sugar would be evenly distributed over the volume of the ice).
Where is the sugar? When you freeze a dilute aqueous sugar solution pure water freezes first, leaving a more concentrated solution until you reach a high concentration of sugar called the eutectic concentration. Now you have the pure water that's frozen out, called proeutectic water, and the concentrated eutectic sugar solution from which the sugar is finally ready to freeze along with the water. Upon freezing, this eutectic composition forms a two-phase eutectic mixture, in which the sugar may appear as veins or lamellae (like veins of some ores among Earth's rocks, though these typically form from a different process). If that structure is in the interior of the ice cube, likely since you cooled the solution from the outside, then licking the outside you got only the pure water proeutectic component. See for more about this process. Addendum: I tried this with store-bought fruit juice which was red in color. Poured it into an ice tray and froze it overnight in my household freezer. It appeared to be a homogeneous red mass and tasted sweet, but was also mushy, implying some liquid was still present (after overnight freezing for an ice cube sized sample).
https://api.stackexchange.com
Here is the article that motivated this question: Does impatience make us fat? I liked this article, and it nicely demonstrates the concept of “controlling for other variables” (IQ, career, income, age, etc) in order to best isolate the true relationship between just the 2 variables in question. Can you explain to me how you actually control for variables on a typical data set? E.g., if you have 2 people with the same impatience level and BMI, but different incomes, how do you treat these data? Do you categorize them into different subgroups that do have similar income, patience, and BMI? But, eventually there are dozens of variables to control for (IQ, career, income, age, etc) How do you then aggregate these (potentially) 100’s of subgroups? In fact, I have a feeling this approach is barking up the wrong tree, now that I’ve verbalized it. Thanks for shedding any light on something I've meant to get to the bottom of for a few years now...!
There are many ways to control for variables. The easiest, and one you came up with, is to stratify your data so you have sub-groups with similar characteristics - there are then methods to pool those results together to get a single "answer". This works if you have a very small number of variables you want to control for, but as you've rightly discovered, this rapidly falls apart as you split your data into smaller and smaller chunks. A more common approach is to include the variables you want to control for in a regression model. For example, if you have a regression model that can be conceptually described as: BMI = Impatience + Race + Gender + Socioeconomic Status + IQ The estimate you will get for Impatience will be the effect of Impatience within levels of the other covariates - regression allows you to essentially smooth over places where you don't have much data (the problem with the stratification approach), though this should be done with caution. There are yet more sophisticated ways of controlling for other variables, but odds are when someone says "controlled for other variables", they mean they were included in a regression model. Alright, you've asked for an example you can work on, to see how this goes. I'll walk you through it step by step. All you need is a copy of R installed. First, we need some data. Cut and paste the following chunks of code into R. Keep in mind this is a contrived example I made up on the spot, but it shows the process. covariate <- sample(0:1, 100, replace=TRUE) exposure <- runif(100,0,1)+(0.3*covariate) outcome <- 2.0+(0.5*exposure)+(0.25*covariate) That's your data. Note that we already know the relationship between the outcome, the exposure, and the covariate - that's the point of many simulation studies (of which this is an extremely basic example). You start with a structure you know, and you make sure your method can get you the right answer. Now then, onto the regression model. Type the following: lm(outcome~exposure) Did you get an Intercept = 2.0 and an exposure = 0.6766? Or something close to it, given there will be some random variation in the data? Good - this answer is wrong. We know it's wrong. Why is it wrong? We have failed to control for a variable that affects the outcome and the exposure. It's a binary variable, make it anything you please - gender, smoker/non-smoker, etc. Now run this model: lm(outcome~exposure+covariate) This time you should get coefficients of Intercept = 2.00, exposure = 0.50 and a covariate of 0.25. This, as we know, is the right answer. You've controlled for other variables. Now, what happens when we don't know if we've taken care of all of the variables that we need to (we never really do)? This is called residual confounding, and it's a concern in most observational studies - that we have controlled imperfectly, and our answer, while close to right, isn't exact. Does that help more?
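For comparison, here is the same idea as a short Python sketch (my own re-creation of a similar simulated data set, not part of the original answer), showing the naive estimate next to the stratified one, where the slope is fitted separately within each level of the covariate and then pooled:

import numpy as np

rng = np.random.default_rng(1)
covariate = rng.integers(0, 2, 1000)                  # binary confounder
exposure = rng.uniform(0, 1, 1000) + 0.3 * covariate
outcome = 2.0 + 0.5 * exposure + 0.25 * covariate     # true exposure effect: 0.5

# Naive estimate: regress outcome on exposure alone (confounded, biased upward).
naive_slope = np.polyfit(exposure, outcome, 1)[0]

# Stratified estimate: slope within each covariate level, pooled by stratum size.
slopes, sizes = [], []
for level in (0, 1):
    mask = covariate == level
    slopes.append(np.polyfit(exposure[mask], outcome[mask], 1)[0])
    sizes.append(mask.sum())
stratified_slope = np.average(slopes, weights=sizes)

print("naive slope     :", round(naive_slope, 3))       # well above 0.5
print("stratified slope:", round(stratified_slope, 3))  # ~0.5, the true effect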
https://api.stackexchange.com
Knapsack problems are easily solved by dynamic programming. Dynamic programming runs in polynomial time; that is why we do it, right? I have read it is actually an NP-complete problem, though, which would mean that solving the problem in polynomial time is probably impossible. Where is my mistake?
The knapsack problem is $\sf{NP\text{-}complete}$ when the numbers are given as binary numbers. In this case, the dynamic programming will take exponentially many steps (in the size of the input, i.e. the number of bits in the input) to finish $\dagger$. On the other hand, if the numbers in the input are given in unary, the dynamic programming will work in polynomial time (in the size of the input). This kind of problem is called weakly $\sf{NP\text{-}complete}$. $\dagger$: Another good example to understand the importance of the encoding used to give the input is considering the usual algorithms to see if a number is prime that go from $2$ up to $\sqrt{n}$ and check if any of them divide $n$. This is polynomial in $n$ but not necessarily in the input size. If $n$ is given in binary, the size of input is $\lg n$ and the algorithm runs in time $O(\sqrt{n}) = O(2^{\lg n/2})$ which is exponential in the input size. And the usual computational complexity of a problem is w.r.t. the size of the input. This kind of algorithm, i.e. polynomial in the largest number that is part of the input but exponential in the input length, is called pseudo-polynomial.
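Here is a minimal Python sketch of the standard dynamic program, just to make the pseudo-polynomial point concrete: the table has (number of items) × (capacity) entries, so the running time grows with the numeric value of the capacity, which is exponential in the number of bits needed to write that capacity down.

def knapsack(values, weights, capacity):
    # 0/1 knapsack by dynamic programming: O(n * capacity) time, O(capacity) space,
    # i.e. polynomial in the *value* of capacity but exponential in its bit-length.
    best = [0] * (capacity + 1)   # best[w] = best value with total weight <= w
    for value, weight in zip(values, weights):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))   # -> 220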
https://api.stackexchange.com
In Hamming's book, The Art of Doing Science and Engineering, he relates the following story: A group at Naval Postgraduate School was modulating a very high frequency signal down to where they could afford to sample, according to the sampling theorem as they understood it. But I realized if they cleverly sampled the high frequency then the sampling act itself would modulate (alias) it down. After some days of argument, they removed the rack of frequency lowering equipment, and the rest of the equipment ran better! Are there any other ways to use aliasing as a primary technique for processing a signal, as opposed to a side-effect to be avoided?
The quoted text in the question is a case of using bandpass sampling or undersampling. Here, to avoid aliasing distortion, the signal of interest must be bandpass. That means that the signal's power spectrum is only non-zero between $f_L < |f| < f_H$. If we sample the signal at a rate $f_s$, then the condition that the subsequent repeated spectra do not overlap means we can avoid aliasing. The repeated spectra happen at every integer multiple of $f_s$. Mathematically, we can write this condition for avoiding aliasing distortion as $$\frac{2 f_H}{n} \le f_s \le \frac{2 f_L}{n - 1}$$ where $n$ is an integer that satisfies $$1 \le n \le \frac{f_H}{f_H - f_L}$$ There are a number of valid frequency ranges you can do this with, as illustrated by the diagram below (taken from the wikipedia link above). In the above diagram, if the problem lies in the grey areas, then we can avoid aliasing distortion with bandpass sampling --- even though the sampled signal is aliased, we have not distorted the shape of the signal's spectrum.
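As a concrete illustration, here is a small Python sketch (my own, with made-up band edges of 20–25 MHz) that enumerates the sample-rate ranges allowed by the two inequalities above:

import math

f_L, f_H = 20e6, 25e6        # band edges of the signal (Hz); example values only
B = f_H - f_L                # bandwidth

n_max = math.floor(f_H / B)  # largest n with n <= f_H / (f_H - f_L)
for n in range(1, n_max + 1):
    fs_min = 2 * f_H / n
    if n == 1:
        print(f"n = 1: fs >= {fs_min / 1e6:.2f} MHz (no upper limit; ordinary lowpass-style sampling)")
    else:
        fs_max = 2 * f_L / (n - 1)
        print(f"n = {n}: {fs_min / 1e6:.2f} MHz <= fs <= {fs_max / 1e6:.2f} MHz")

For this band the largest valid n is 5, which allows sampling as slowly as 2B = 10 MHz even though the signal content sits at 20–25 MHz.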
https://api.stackexchange.com
EDIT: I've now asked a similar question about the difference between categories and sets. Every time I read about type theory (which admittedly is rather informal), I can't really understand how it differs from set theory, concretely. I understand that there is a conceptual difference between saying "x belongs to a set X" and "x is of type X", because intuitively, a set is just a collection of objects, while a type has certain "properties". Nevertheless, sets are often defined according to properties as well, and if they are, then I am having trouble understanding how this distinction matters in any way. So in the most concrete way possible, what exactly does it imply about $x$ to say that it is of type $T$, compared to saying that it is an element in the set $S$? (You may pick any type and set that makes the comparison most clarifying).
To understand the difference between sets and types, one has to go back to pre-mathematical ideas of "collection" and "construction", and see how sets and types mathematize these. There is a spectrum of possibilities on what mathematics is about. Two of these are: We think of mathematics as an activity in which mathematical objects are constructed according to some rules (think of geometry as the activity of constructing points, lines and circles with a ruler and a compass). Thus mathematical objects are organized according to how they are constructed, and there are different types of construction. A mathematical object is always constructed in some unique way, which determines its unique type. We think of mathematics as a vast universe full of pre-existing mathematical objects (think of the geometric plane as given). We discover, analyze and think about these objects (we observe that there are points, lines and circles in the plane). We collect them into sets. Usually we collect elements that have something in common (for instance, all lines passing through a given point), but in principle a set may hold together an arbitrary selection of objects. A set is specified by its elements, and only by its elements. A mathematical object may belong to many sets. We are not saying that the above possibilities are the only two, or that any one of them completely describes what mathematics is. Nevertheless, each view can serve as a useful starting point for a general mathematical theory that usefully describes a wide range of mathematical activities. It is natural to take a type $T$ and imagine the collection of all things that we can construct using the rules of $T$. This is the extension of $T$, and it is not $T$ itself. For instance, here are two types that have different rules of construction, but they have the same extension: The type of pairs $(n, p)$ where $n$ is constructed as a natural number, and $p$ is constructed as a proof demonstrating that $n$ is an even prime number larger than $3$. The type of pairs $(m, q)$ where $m$ is constructed as a natural number, and $q$ is constructed as a proof demonstrating that $m$ is an odd prime smaller than $2$. Yes, these are silly trivial examples, but the point stands: both types have nothing in their extension, but they have different rules of construction. In contrast, the sets $$\{ n \in \mathbb{N} \mid \text{$n$ is an even prime larger than $3$} \}$$ and $$\{ m \in \mathbb{N} \mid \text{$m$ is an odd prime smaller than $2$} \}$$ are equal because they have the same elements. Note that type theory is not about syntax. It is a mathematical theory of constructions, just like set theory is a mathematical theory of collections. It just so happens that the usual presentations of type theory emphasize syntax, and consequently people end up thinking type theory is syntax. This is not the case. To confuse a mathematical object (construction) with a syntactic expression that represents it (a term former) is a basic category mistake that has puzzled logicians for a long time, but not anymore.
https://api.stackexchange.com
I'm a bit confused about the concept of ground, and perhaps voltage as well, particularly when trying to analyze a circuit. When I learned about Ohm's law in grade school, I learned how to apply the law to calculate current, voltage, and resistance of simple circuits. For instance, if we were given the following circuit: We could be asked to calculate the current passing through the circuit. At the time, I'd simply compute (based on the rules given) 1.5V/1Ohms=1.5A. Later on, however, I learned that the reason the voltage of the resistor would be 1.5V is because voltage is really the difference in potential between two points, and that the difference of the voltage across the battery would be the same as that of the resistor (correct me if I'm mistaken), or 1.5V. I got confused, however, after the introduction of the concept of ground. The first time I tried to do the current calculation for a circuit similar to the previous circuit on a simulator, the program complained about not having a ground and "floating voltage sources". After a bit of searching, I learned that circuits need ground as a reference point or for safety reasons. It was mentioned in one explanation that one can pick any node for ground, although it's customary to design circuits so there is an "easy place" to pick ground. Thus for this circuit I picked ground at the bottom, but would it be okay to pick ground between the 7 ohm and 2 ohm resistor - or any other place? And what would be the difference when analyzing the circuit? I've read that there are 3 typical ground symbols with different meanings - chassis ground, earth ground, and signal ground. A lot of circuits I've seen used in exercises either use earth ground or signal ground. What purpose is there in using earth ground? What is the signal ground connected to? Another question: since the ground is at unknown potential, wouldn't there be current flowing between ground and the circuit? From what I've read we treat the ground as 0V, but wouldn't there be some sort of effect because of a difference in potential between the circuit and ground? Would the effect be different depending on what ground was used? Finally: In nodal analysis, one customarily picks a ground at the negative terminal of the battery. However, when there are multiple voltage sources, some of them are "floating". What meaning does the voltage of a floating voltage source have?
The first time I tried to do the current calculation for a circuit similar to the previous circuit on a simulator, the program complained about not having a ground and "floating voltage sources". Your simulator wants to be able to do its calculations and report out the voltages of each node relative to some reference, rather than have to report the difference between every possible pair of nodes. It needs you to tell it which node is the reference node. Other than that, for a well-designed circuit, the "ground" has no significance in the simulation. If you design a circuit where there is no dc path between two nodes, though, the circuit will be unsolvable. Typical SPICE-like simulators resolve this by connecting extra resistors, typically 1 GOhm, between every node and ground, so it is conceivable that the choice of ground node could artificially affect the results of a simulation of a very high-impedance circuit. I picked ground at the bottom, but would it be okay to pick ground between the 7 ohm and 2 ohm resistor - or any other place? And what would be the difference when analyzing the circuit? You can pick any node as your reference ground. Often we think ahead and pick a node that will eliminate terms for the equations (by setting them equal to 0), or simplify the schematic (by allowing us to indicate connections through a ground symbol instead of by a bunch of lines connecting together). I've read that there are 3 typical ground symbols with different meanings - chassis ground, earth ground, and signal ground. A lot of circuits I've seen used in exercises use earth ground or signal ground. What purpose is there in using earth ground? What is the signal ground connected to? Earth ground is used to indicate a connection to something that is physically connected to the ground beneath our feet. A wire leading through the building down to a copper rod driven into the ground, in a typical case. This ground is used for safety purposes. We assume that someone who handles our equipment will be connected to something like earth ground by their feet. So earth ground is the safest circuit node for them to touch, because it won't drive currents through their body. Chassis ground is just the potential of the case or enclosure of your circuit. For safety purposes it's often best for this to be connected to earth ground. But calling it "chassis" instead of "earth" means you haven't assumed that it is connected. Signal ground is often distinguished from earth ground (and partially isolated from it) to minimize the possibility that currents flowing through the earth ground wires will disturb measurements of the important signals. Another question: since the ground is at unknown potential, wouldn't there be current flowing to or from ground to the circuit? Remember, a complete circuit is required for current to flow. You would need connections to earth ground in two places for current to flow in and out of your circuit from earth ground. Realistically, you'd also need some kind of voltage source (a battery, or an antenna, or something) in one of those connection paths to have any sustained flow back and forth between your circuit and the earth. However, when there are multiple voltage sources, some of them are "floating". What meaning does the voltage of a floating voltage source have? If I have voltage source with value V between nodes a and b, it means that the voltage difference between a and b will be V volts. A perfect voltage source will generate whatever current is required to make this happen. 
If one of the nodes happens to be ground, that immediately gives you the value at the other node in your reference system. If neither of those nodes happens to be "ground", then you will need some other connections to establish the value of the voltages at a and b relative to ground.
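Here is a small Python sketch of the "ground is just a reference" point from earlier in this answer. The component values (a 9 V source with 7 Ω and 2 Ω resistors in series) are my own stand-ins for the schematic in the question; whichever node you call 0 V, the voltage differences, and hence the currents, come out the same:

# Series loop: source V between nodes A (+) and C (-), R1 from A to B, R2 from B to C.
V, R1, R2 = 9.0, 7.0, 2.0             # assumed values, not from the original schematic
I = V / (R1 + R2)                     # loop current: 1 A

def node_potentials(reference):
    # Potentials computed with C as the reference, then shifted so that the
    # chosen node reads 0 V. Only the shift changes, never the differences.
    p = {"A": V, "B": V - I * R1, "C": 0.0}
    offset = p[reference]
    return {node: volts - offset for node, volts in p.items()}

for ref in ("C", "B", "A"):
    p = node_potentials(ref)
    print(f"ground at {ref}: {p}  V(A)-V(B) = {p['A'] - p['B']:.1f} V, "
          f"V(B)-V(C) = {p['B'] - p['C']:.1f} V")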
https://api.stackexchange.com
I was coding a physics simulation, and noticed that I was using discrete time. That is, there was an update mechanism advancing the simulation by a fixed amount of time repeatedly, emulating a changing system. I thought that was interesting, and now believe the real world must behave just like my program does. Is it actually advancing forward in tiny but discrete time intervals?
As we cannot resolve arbitrarily small time intervals, what is "really" the case cannot be decided. But in classical and quantum mechanics (i.e., in most of physics), time is treated as continuous. Physics would become very awkward if expressed in terms of a discrete time: the discrete case is essentially intractable since analysis (the tool created by Newton, in a sense the father of modern physics) can no longer be applied. Edit: If time appears discrete (or continuous) at some level, it could still be continuous (or discrete) at higher resolution. This is due to general reasons that have nothing to do with time per se. I explain it by analogy: For example, line spectra look discrete, but upon higher resolution one sees that they have a line width with a physical meaning. Thus one cannot definitively resolve the question with finitely many observations of finite accuracy, no matter how contrived the experiment.
https://api.stackexchange.com
Are there any general guidelines on where to place dropout layers in a neural network?
In the original paper that proposed dropout layers, by Hinton (2012), dropout (with p=0.5) was used on each of the fully connected (dense) layers before the output; it was not used on the convolutional layers. This became the most commonly used configuration. More recent research has shown some value in applying dropout also to convolutional layers, although at much lower levels: p=0.1 or 0.2. Dropout was used after the activation function of each convolutional layer: CONV->RELU->DROP.
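As an illustration of that placement (a sketch of my own in PyTorch, not code from the papers; the layer sizes are arbitrary), light dropout follows each convolutional activation and p=0.5 dropout sits on the fully connected layers before the output:

import torch.nn as nn

# Illustrative CNN for 32x32 RGB inputs.
model = nn.Sequential(
    # convolutional blocks: CONV -> RELU -> DROP with a low rate (0.1-0.2)
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16

    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.MaxPool2d(2),                      # 16x16 -> 8x8

    nn.Flatten(),
    # fully connected layers: dropout with p=0.5, as in the original paper
    nn.Linear(64 * 8 * 8, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),                   # output layer: no dropout after it
)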
https://api.stackexchange.com
What are some recommended resources (books, tutorials, lectures, etc.) on digital signal processing, and how to begin working with it on a technical level?
My recommendation in terms of textbooks is Rick Lyons's Understanding DSP. My review of the latest edition is here. I, and many others from the ${\tt comp.dsp}$ community and elsewhere, have helped Rick revise parts of the text since the first edition. For self-study, I know of no better book. As an on-line, free resource, I recommend Steve Smith's book. Personally, I prefer Rick's style, but Steve's book has the advantage of online accessibility (and the online version is free!). Edit: Rick sent me some feedback that I thought I'd share here: For your colleagues that have a copy of my DSP book, I'll be happy to send them the errata for my book. All they have to do is send me an E-mail telling me (1) The Edition Number, and (2) the Printing Number of their copy of the book. The Printing Number can be found on the page just before the 'Dedication' page. My E-mail address is: R.Lyons [at] ieee.org I recommend that your colleagues have a look at: Rick also gave me a long list of online DSP references. There are way too many to put here. I will see about setting up a GoogleDocs version and re-post here later.
https://api.stackexchange.com
It is my understanding that metals are a crystal lattice of ions, held together by delocalized electrons, which move freely through the lattice (and conduct electricity, heat, etc.). If two pieces of the same metal are touched together, why don't they bond? It seems to me the delocalized electrons would move from one metal to the other, and extend the bond, holding the two pieces together. If the electrons don't move freely from one piece to the other, why would this not happen when a current is applied (through the two pieces)?
I think that mere touching does not bring the surfaces close enough. The surface of a metal is not perfect usually. Maybe it has an oxide layer that resists any kind of reaction. If the metal is extremely pure and if you bring two pieces of it extremely close together, then they will join together. It's also called cold welding. For more information: What prevents two pieces of metal from bonding? Cold Welding
https://api.stackexchange.com
I am working with over a million (long) reads, and aligning them to a large genome. I am considering running my alignment jobs in parallel, distributing horizontally across hundreds of nodes rather than trying to run a single job with dozens of cores. I would like to merge the sorted BAM files together for further downstream analysis. What is the most efficient way to do so while maintaining a valid file header and taking advantage of the fact that the input bam files are already sorted?
samtools merge merged.bam *.bam is efficient enough since the input files are sorted. You can get a bit faster with sambamba and/or biobambam, but they're not typically already installed and IO quickly becomes a bottleneck anyway.
https://api.stackexchange.com
First off, I hope this is the correct Stack Exchange board. My apologies if it is not. I am working on something that requires me to calibrate the camera. I have successfully implemented the code to do this in OpenCV (C++). I am using the inbuilt chessboard functions and a chessboard I have printed off. There are many tutorials on the internet which state to give more than one view of the chessboard and extract the corners from each frame. Is there an optimum set of views to give to the function to get the most accurate camera calibration? What affects the accuracy of the calibration? For instance, if I give it 5 images of the same view without moving anything it gives some straight results when I try and undistort the webcam feed. FYI to anyone visiting: I've recently found out you can get much better camera calibration by using a grid of asymmetric circles and the respective OpenCV function.
You have to take images for calibration from different points of view and angles, with as big a difference between angles as possible (all three Euler angles should vary), but so that the pattern still fits within the camera's field of view. The more views you use, the better the calibration will be. That is needed because during the calibration you estimate the focal length and distortion parameters, so to get them by the least-squares method different angles are needed. If you aren't moving the camera at all, you are not getting new information and the calibration is useless. Be aware that you usually need only the focal length; distortion parameters are usually negligible even for consumer cameras, web cameras and cell phone cameras. If you already know the focal length from the camera specification, you may not even need calibration. Distortion coefficients are more significant in "special" cameras like wide-angle or 360° cameras. Here is the Wikipedia entry about calibration. And here is non-linear distortion, which is negligible for most cameras.
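As a rough sketch of the workflow (in Python rather than the question's C++; the board size, filenames, and corner-refinement parameters below are placeholders I chose, not values from the answer), a calibration loop over views taken from many different angles could look like this:

import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner-corner count of the printed chessboard (placeholder)
# Object points: the corner grid in the board's own coordinate system (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):          # views from many different angles
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)   # quick sanity check on the calibration
print("camera matrix:\n", K)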
https://api.stackexchange.com
In a multicore processor, what happens to the contents of a core's cache (say L1) when a context switch occurs on that core? Is the behaviour dependent on the architecture or is it a general behaviour followed by all chip manufacturers?
That depends both on the processor (not just the processor series, it can vary from model to model) and the operating system, but there are general principles. Whether a processor is multicore has no direct impact on this aspect; the same process could be executing on multiple cores simultaneously (if it's multithreaded), and memory can be shared between processes, so cache synchronization is unavoidable regardless of what happens on a context switch. When a processor looks up a memory location in the cache, if there is an MMU, it can use either the physical or the virtual address of that location (sometimes even a combination of both, but that's not really relevant here). With physical addresses, it doesn't matter which process is accessing the address: the contents can be shared. So there is no need to invalidate the cache content during a context switch. If the two processes map the same physical page with different attributes, this is handled by the MMU (acting as an MPU (memory protection unit)). The downside of a physically addressed cache is that the MMU has to sit between the processor and the cache, so the cache lookup is slow. L1 caches are almost never physically addressed; higher-level caches may be. The same virtual address can denote different memory locations in different processes. Hence, with a virtually addressed cache, the processor and the operating system must cooperate to ensure that a process will find the right memory. There are several common techniques. The context-switching code provided by the operating system can invalidate the whole cache; this is correct but very costly. Some CPU architectures have room in their cache line for an ASID (address space identifier), the hardware version of a process ID, which is also used by the MMU. This effectively separates cache entries from different processes, and means that two processes that map the same page will have incoherent views of the same physical page (there is usually a special ASID value indicating a shared page, but these need to be flushed if they are not mapped to the same address in all processes where they are mapped). If the operating system takes care that different processes use non-overlapping address spaces (which defeats some of the purpose of using virtual memory, but can be done sometimes), then cache lines remain valid. Most processors that have an MMU also have a TLB. The TLB is a cache of mappings from virtual addresses to physical addresses. The TLB is consulted before lookups in physically-addressed caches, to determine the physical address quickly when possible; the processor may start the cache lookup before the TLB lookup is complete, as often candidate cache lines can be identified from the middle bits of the address, between the bits that determine the offset in a cache line and the bits that determine the page. Virtually-addressed caches bypass the TLB if there is a cache hit, although the processor may initiate the TLB lookup while it is querying the cache, in case of a miss. The TLB itself must be managed during a context switch. If the TLB entries contain an ASID, they can remain in place; the operating system only needs to flush TLB entries if their ASID has changed meaning (e.g. because a process has exited). If the TLB entries are global, they must be invalidated when switching to a different context.
https://api.stackexchange.com
We were dealing with the Third Law of Thermodynamics in class, and my teacher mentioned something that we found quite fascinating: It is physically impossible to attain a temperature of zero kelvin (absolute zero). When we pressed him for the rationale behind that, he asked us to take a look at the graph for Charles' Law for gases: His argument is that when we extrapolate the graph to -273.15 degrees Celsius (i.e. zero kelvin), the volume drops down all the way to zero; and "since no piece of matter can occupy zero volume ('matter' being something that has mass and occupies space), from the graph for Charles' Law, it is very clear that it is not possible to attain the temperature of zero kelvin". However, someone else gave me a different explanation: "To reduce the temperature of a body down to zero kelvin would mean removing all the energy associated with the body. Now, since energy is always associated with mass, if all the energy is removed there won't be any mass left. Hence it isn't possible to attain absolute zero." Who, if anybody, is correct? Edit 1: A note-worthy point made by @Loong a while back: (From the engineer's perspective) To cool something to zero kelvin, first you'll need something that is cooler than zero kelvin. Edit 2: I've got an issue with the 'no molecular motion' notion that I seem to find everywhere (including @Ivan's fantastic answer) but that I can't seem to get cleared up. The notion: At absolute zero, all molecular motion stops. There's no longer any kinetic energy associated with molecules/atoms. The problem? I quote Feynman: As we decrease the temperature, the vibration decreases and decreases until, at absolute zero, there is a minimum amount of motion that atoms can have, but not zero. He goes on to justify this by bringing in Heisenberg's Uncertainty Principle: Remember that when a crystal is cooled to absolute zero, the atoms do not stop moving, they still 'jiggle'. Why? If they stopped moving, we would know where they were and that they have zero motion, and that is against the Uncertainty Principle. We cannot know where they are and how fast they are moving, so they must be continually wiggling in there! So, can anyone account for Feynman's claim as well? To the not-so-hardcore student of physics that I am (high-schooler here), his argument seems quite convincing. So to make it clear, I'm asking for two things in this question: 1) Which argument is correct? My teacher's or the other guy's? 2) At absolute zero, do we have zero molecular motion as most sources state, or do atoms go on "wiggling" in there as Feynman claims?
There was a story in my days about a physical chemist who was asked to explain some effect, illustrated by a poster on the wall. He did that, after which someone noticed that the poster was hanging upside down, so the effect appeared reversed in sign. Undaunted, the guy immediately explained it the other way around, just as convincingly as he did the first time. Cooking up explanations on the spot is a respectable sport, but your teacher went a bit too far. What's with that Charles' law? See, it is a gas law; it is about gases. And even then it is but an approximation. To make it exact, you have to make your gas ideal, which can't be done. As you lower the temperature, all gases become less and less ideal. And then they condense, and we're left to deal with liquids and solids, to which the said law never applied, not even as a very poor approximation. Appealing to this law when we are near absolute zero is about as sensible as ruling out a certain reaction mechanism on the grounds that it requires atoms to move faster than allowed by the road speed limit in the state of Hawaii. The energy argument is even more ridiculous. We don't have to remove all energy, but only the kinetic energy. The $E=mc^2$ part remains there, so the mass is never going anywhere. All that being said, there is no physical law forbidding the existence of matter at absolute zero. It's not like its existence will cause the world to go down with error 500. It's just that the closer you get to it, the more effort it takes, like with other ideal things (ideal vacuum, ideally pure compound, crystal without defects, etc). If anything, we're doing a pretty decent job at it. Using sophisticated techniques like laser cooling or magnetic evaporative cooling, we've long surpassed nature's record in coldness.
https://api.stackexchange.com
I'm looking for tools to check the quality of a VCF I have of a human genome. I would like to check the VCF against publicly known variants across other human genomes, e.g. how many SNPs are already in public databases, whether insertions/deletions are at known positions, insertion/deletion length distribution, other SNVs/SVs, etc.? I suspect that there are resources from previous projects to check for known SNPs and InDels by human subpopulations. What resources exist for this, and how do I do it?
To achieve (at least some of) your goals, I would recommend the Variant Effect Predictor (VEP). It is a flexible tool that provides several types of annotations on an input .vcf file. I agree that ExAC is the de facto gold standard catalog for human genetic variation in coding regions. To see the frequency distribution of variants by global subpopulation make sure "ExAC allele frequencies" is checked in addition to the 1000 genomes. Output in the web-browser: If you download the annotated .vcf, frequencies will be in the INFO field: ##INFO=<ID=CSQ,Number=.,Type=String,Description="Consequence annotations from Ensembl VEP. Format: Allele|Consequence|IMPACT|SYMBOL|Gene|Feature_type|Feature|BIOTYPE|EXON|INTRON|HGVSc|HGVSp|cDNA_position|CDS_position|Protein_position|Amino_acids|Codons|Existing_variation|DISTANCE|STRAND|FLAGS|SYMBOL_SOURCE|HGNC_ID|TSL|SIFT|PolyPhen|AF|AFR_AF|AMR_AF|EAS_AF|EUR_AF|SAS_AF|AA_AF|EA_AF|ExAC_AF|ExAC_Adj_AF|ExAC_AFR_AF|ExAC_AMR_AF|ExAC_EAS_AF|ExAC_FIN_AF|ExAC_NFE_AF|ExAC_OTH_AF|ExAC_SAS_AF|CLIN_SIG|SOMATIC|PHENO|MOTIF_NAME|MOTIF_POS|HIGH_INF_POS|MOTIF_SCORE_CHANGE The previously mentioned Annovar can also annotate with ExAC allele frequencies. Finally, should mention the newest whole-genome resource, gnomAD.
https://api.stackexchange.com
I have a dataset that has both continuous and categorical data. I am analyzing by using PCA and am wondering if it is fine to include the categorical variables as a part of the analysis. My understanding is that PCA can only be applied to continuous variables. Is that correct? If it cannot be used for categorical data, what alternatives exist for their analysis?
Although a PCA applied on binary data would yield results comparable to those obtained from a Multiple Correspondence Analysis (factor scores and eigenvalues are linearly related), there are more appropriate techniques to deal with mixed data types, namely Multiple Factor Analysis for mixed data available in the FactoMineR R package (FAMD()). If your variables can be considered as structured subsets of descriptive attributes, then Multiple Factor Analysis (MFA()) is also an option. The challenge with categorical variables is to find a suitable way to represent distances between variable categories and individuals in the factorial space. To overcome this problem, you can look for a non-linear transformation of each variable--whether it be nominal, ordinal, polynomial, or numerical--with optimal scaling. This is well explained in Gifi Methods for Optimal Scaling in R: The Package homals, and an implementation is available in the corresponding R package homals.
https://api.stackexchange.com
I'm considering learning a new language to use for numerical/simulation modelling projects, as a (partial) replacement for the C++ and Python that I currently use. I came across Julia, which sounds kind of perfect. If it does everything it claims, I could use it to replace both C++ and Python in all my projects, since it can access high-level scientific computing library code (including PyPlot) as well as running for loops at a similar speed to C. I would also benefit from things like proper coroutines that don't exist in either of the other languages. However, it's a relatively new project, currently at version 0.x, and I found various warnings (posted at various dates in the past) that it's not quite ready for the day to day use. Consequently, I would like some information about the status of the project right now (February 2014, or whenever an answer is posted), in order to help me assess whether I personally should consider investing the time to learn this language at this stage. I would appreciate answers that focus on specific relevant facts about the Julia project; I'm less interested in opinions based on experience with other projects. In particular, a comment by Geoff Oxberry suggests that the Julia API is still in a state of flux, requiring the code to be updated when it changes. I would like to get an idea of the extent to which this is the case: which areas of the API are stable, and which are likely to change? I guess typically I would mostly be doing linear algebra (e.g. solving eigenproblems), numerical integration of ODEs with many variables, and plotting using PyPlot and/or OpenGL, as well as low-level C-style number crunching (e.g. for Monte Carlo simulations). Is Julia's library system fully developed in these areas? In particular, is the API more or less stable for those types of activities, or would I find that my old code would tend to break after upgrading to a new version of Julia? Finally, are there any other issues that would be worth considering in deciding whether to use Julia for serious work at the present time?
Julia, at this point (May 2019, Julia v1.1 with v1.2 about to come out) is quite mature for scientific computing. The v1.0 release signified an end to yearly code breakage. With that, a lot of scientific computing libraries have had the time to simply grow without disruption. A broad overview of Julia packages can be found at pkg.julialang.org. For core scientific computing, the DifferentialEquations.jl library for differential equations (ODEs, SDEs, DAEs, DDEs, Gillespie simulations, etc.), Flux.jl for neural networks, and the JuMP library for mathematical programming (optimization: linear, quadratic, mixed integer, etc. programming) are three of the cornerstones of the scientific computing ecosystem. The differential equation library in particular is far more developed than what you'd see in other languages, with a large development team implementing features like EPIRK integrators, Runge-Kutta-Nystrom, Stiff/Differential-Algebraic delay differential equation, and adaptive time stiff stochastic differential equation integrators, along with a bunch of other goodies like adjoint sensitivity analysis, chemical reaction DSLs, matrix-free Newton-Krylov, and full (data transfer free) GPU compatibility, with training of neural differential equations, all with fantastic benchmark results (disclaimer: I am the lead developer). The thing that is a little mind-boggling about the matured Julia ecosystem is its composibility. Essentially, when someone builds a generic library function like those in DifferentialEquations.jl, you can use any AbstractArray/Number type to generate new code on the fly. So for example, there is a library for error propagation (Measurements.jl) and when you stick it in the ODE solver, it automatically compiles a new version of the ODE solver which does error propagation without parameter sampling. Because of this, you may not find some features documented because the code for the features generates itself, and so you need to think more about library composition. One of the ways where composition is most useful is in linear algebra. The ODE solvers for example allow you to specify jac_prototype, letting you give it the type for the Jacobian that will be used internally. Of course there's things in the LineraAlgebra standard library like Symmetric and Tridiagonal you can use here, but given the utility of composibility in type generic algorithms, people have by now gone and built entire array type libraries. BandedMatrices.jl and BlockBandedMatrices.jl are libraries which define (Block) banded matrix types which have fast lu overloads, making them a nice way to accelerate the solution of stiff MOL discretizations of systems of partial differential equations. PDMats.jl allows for the specification of positive-definite matrices. Elemental.jl allows you to define a distributed sparse Jacobian. CuArrays.jl defines arrays on the GPU. Etc. Then you have all of your number types. Unitful.jl does unit checking at compile time so it's an overhead-free units library. DoubleFloats.jl is a fast higher precision library, along with Quadmath.jl and ArbFloats.jl. ForwardDiff.jl is a library for forward-mode automatic differentiation which uses Dual number arithmetic. And I can keep going listing these out. And yes, you can throw them into sufficiently generic Julia libraries like DifferentialEquations.jl to compile a version specifically optimized for these number types. 
Even something like ApproxFun.jl, which treats functions as algebraic objects (like Chebfun), works with this generic system, allowing the specification of PDEs as ODEs on scalars in a function space. Given the advantages of composibility and the way that types can be used to generate new and efficient code on generic Julia functions, there has been a lot of work to get implementations of core scientific computing functionality into pure Julia. Optim.jl for nonlinear optimization, NLsolve.jl for solving nonlinear systems, IterativeSolvers.jl for iterative solvers of linear systems and eigensystems, BlackBoxOptim.jl for black-box optimization, etc. Even the neural network library Flux.jl just uses CuArrays.jl's automatic compilation of code to the GPU for its GPU capabilities. This composibility was the core of what created things like neural differential equations in DiffEqFlux.jl. Probabilistic programming languages like Turing.jl are also quite mature now and make use of the same underlying tooling. Since Julia's libraries are so fundamentally based on code generation tools, it should be no surprise that there's a lot of tooling around code generation. Julia's broadcast system generates fused kernels on the fly which are overloaded by array types to give a lot of the features mentioned above. CUDAnative.jl allows for compiling Julia code to GPU kernels. ModelingToolkit.jl automatically de-sugars ASTs into a symbolic system for transforming mathematical code. Cassette.jl lets you "overdub" someone else's existing function, using rules to change their function before compile time (for example: change all of their array allocations to static array allocations and move operations to the GPU). This is more advanced tooling (I don't expect everyone doing scientific computing to take direct control of the compiler), but this is how a lot of the next generation tooling is being built (or rather, how the features are writing themselves). As for parallelism, I've mentioned GPUs, and Julia has built-in multithreading and distributed computing. Julia's multithreading will very soon use a parallel-tasks runtime (PARTR) architecture which allows for automated scheduling of nested multithreading. If you want to use MPI, you can just use MPI.jl. And then of course, the easiest way to make use of it all is to just use an AbstractArray type setup to use the parallelism in its operations. Julia also has the basic underlying ecosystem you would expect of a general purpose language used for scientific applications. It has the Juno IDE with a built-in debugger with breakpoints, and it has Plots.jl for making all sorts of plots. A lot of specific tools are nice as well; for example, Revise.jl automatically updates your functions/library when a file is saved. You have your DataFrames.jl, statistics libraries, etc. One of the nicest libraries is actually Distributions.jl, which lets you write algorithms generic to the distribution (for example: rand(dist) draws a random number from whatever distribution was passed in), and there's a whole load of univariate and multivariate distributions (and of course dispatch happens at compile time, making this all as fast as hardcoding a function specific to the distribution). There is a bunch of data handling tooling, web servers, etc.; you name it. At this point it's mature enough that if there's a basic scientific thing you'd expect to exist, you can just Google it with .jl or Julia and it'll show up. Then there's a few things to keep in mind on the horizon. 
PackageCompiler is looking to build binaries from Julia libraries, and it already has some successes but needs more development. Makie.jl is a whole library for GPU-accelerated plotting with interactivity, and it still needs some more work, but it's really looking to become the main plotting library in Julia. Zygote.jl is a source-to-source automatic differentiation library which doesn't have the performance issues of a tracing-based AD (Flux's Tracker, PyTorch, Jax), and it is looking to work on all pure Julia code. Etc. In conclusion, you can find a lot of movement in a lot of places, but in most areas there is already a solid, mature library. It's no longer at a place where you ask "will it be adopted?": Julia has been adopted by enough people (millions of downloads) that it has the momentum to stay around for good. It has a really nice community, so if you ever just want to shoot the breeze and talk about parallel computing or numerical differential equations, some of the best chat rooms for that are in the Julialang Slack. Whether it's a language you should learn is a personal question, and whether it's the right language for your project is a technical question, and those are different. But is it a language that has matured and has the backing of a large, consistent group of developers? That seems to be an affirmative yes.
https://api.stackexchange.com
Software is a fundamental part of computational science, and is increasingly recognized as an essential part of the scientific record. Given the value of using existing and well-tested code, it seems worthwhile to communicate the existence of useful codes as widely as possible and credit their creators. In an academic setting, this means publishing some papers that are primarily focused on software. Where can one publish scholarly works whose primary focus is computational software? To be completely clear, I'm referring to works that may not include any new mathematics, algorithms, etc. -- they are really focused on software. I would also be interested in hearing from those who have submitted such papers to these journals, what the experience was like and which venues they recommend. Summary of answers given: Transactions on Mathematical Software Scientific Programming SIAM Journal on Scientific Computing (SISC) Software section The Archive of Numerical Software Open Research Computation Computer Physics Communications Advances in Engineering Software Journal of Statistical Software Journal of Chemical Theory and Computation Source Code for Biology and Medicine PLoS ONE International Journal of Quantum Chemistry Epidemiology Computing in Science & Engineering Journal of Computational Chemistry Geoscientific Model Development Journal of Machine Learning Research Mathematical Programming Computation Journal of Open Source Software SoftwareX
There are some other application-specific journals to add to the list, such as the Journal of Computational Physics or Computer Physics Communications, which accept articles about algorithms as well as the software used to implement them. If you're in the chemistry field, Journal of Chemical Theory and Computation might be another journal to consider. All of these do allow packages to be published—I've seen codes I've used discussed in them. Computers and Chemical Engineering does allow software implementation papers, but they need to do something original—it can't be an "incremental advance" paper.
https://api.stackexchange.com
Both $\ce{SF6}$ and $\ce{SH6}$ and $\ce{SF4}$ and $\ce{SH4}$ have the same central atom and the same hybridization, but my teacher specifically mentioned that $\ce{SH6}$ and $\ce{SH4}$ don't exist. I've looked everywhere but I can't figure out why? I'd appreciate some insight into the problem.
TL;DR Fluorine is electronegative and can support the extra negative charge that is dispersed on the six X atoms in $\ce{SX6}$, whereas hydrogen cannot. First, let's debunk a commonly taught myth, which is that the bonding in $\ce{SF6}$ involves promotion of electrons to the 3d orbitals with a resulting $\mathrm{sp^3d^2}$ hybridisation. This is not true. Here's a recent and arguably more understandable reference: J. Chem. Educ. 2020, 97 (10), 3638–3646 which explains this. Quoting: The natural ionicity, $i_\ce{SF}$, of each $\ce{S-F}$ bond [in $\ce{SF6}$] is 0.86, indicating a rather ionic σ bond. Each fluorine has an average charge of $−0.45$, resulting in a sulfur center of charge $+2.69$. [...] In summary, the electronic structure of this system is best described as a sulfur center with a charge somewhere between $2+$ and $3+$; the corresponding negative charge is distributed among the equivalent fluorine atoms. Shown in Figure 12 is the orbital occupation of the sulfur center, $\ce{3s^1 3p^{2.1} 3d^{0.19} 5p^{0.03} 4f^{0.01}}$. The minimal occupation of d-type orbitals eliminates the possibility of $\mathrm{sp^3d^2}$ hybridization. If not via d-orbital bonding, how does one then describe the structure of $\ce{SF6}$? I'll present an LCAO-MO answer. Here's a "simple" MO diagram (I won't go through the details of how to construct it). It's actually fairly similar to that of an octahedral transition metal complex, except that here the 3s and 3p orbitals on sulfur are below the 3d orbitals. Just for the sake of counting electrons, I treated the compound as being "fully ionic", i.e. $\ce{S^6+} + 6\ce{F-}$. So sulfur started off with 0 valence electrons, and each fluorine started off with 2 electrons in its σ orbitals. I've also neglected the π contribution to bonding, so the fluorine lone pairs don't appear in the diagram. You'll see that, for a total of six $\ce{S-F}$ bonds, we only have four pairs of electrons in bonding MOs. The other two pairs of electrons reside in the $\mathrm{e_g}$ MOs, which are nonbonding and localised on fluorine. If we want to assign a formal charge to sulfur based on this diagram, it would be +2, because there are only actually four bonds. We could perhaps use Lewis diagrams to represent it this way: The "hypervalent" resonance form contributes rather little and does not rely on invoking d-orbital participation; see Martin's comment on my answer below for greater detail about the resonance contributions. I am guessing that its existence can be mostly attributed to negative hyperconjugation, although I'm not 100% sure on this. The trans and cis resonance forms are not equal, so their contribution is not the same, but the contribution from each individual trans resonance form has to be the same by symmetry. Overall, the six fluorines in $\ce{SF6}$ have to be equivalent by the octahedral symmetry of the molecule. You could run a $\ce{^19F}$ NMR of the compound and it should only give you one peak. (An alternative way of looking at it is that two of the $\ce{S-F}$ bonds are "true" 2c2e bonds, and that the other four $\ce{S-F}$ "bonds" are in fact just a couple of 3c4e bonds, but I won't go into that. For more information on multi-centre bonds, this article is a nice introduction: J. Chem. Educ. 1998, 75, 910; see also refs. 12 and 13 in that article.) Right from the outset, we can see why $\ce{SH6}$ is not favoured as much. If we use the same framework to describe the bonding in $\ce{SH6}$, then those "correct" resonance forms that we drew would involve $\ce{H-}$. 
I'll leave it to the reader to figure out whether $\ce{F-}$ or $\ce{H-}$ is more stable. Alternatively, if you want to stick to the MO description, the idea is that in $\ce{SH6}$, the relatively high energy of H1s compared to F2p will lead to the nonbonding $\mathrm{e_g}$ orbitals being relatively higher in energy. All things being equal, it's less favourable for a higher-energy orbital to be occupied, and $\ce{SH6}$ would therefore be very prone to losing these electrons, i.e. being oxidised. In fact, if we do remove those four electrons from the $\mathrm{e_g}$ orbitals, then it's possible that these six-coordinate hydrides could form. But obviously we might not want to have a $\ce{SH6^4+}$ molecule on the loose. It'll probably lose all of its protons in a hurry to get back to being $\ce{H2S + 4H+}$. Is there anything better? Well, there's the species $\ce{CH6^2+}$, which is methane protonated twice. It's valence isoelectronic with $\ce{SH6^4+}$, and if you want to read about it, here's an article: J. Am. Chem. Soc. 1983, 105, 5258. While it's hardly the most stable molecule on the planet, it's certainly more plausible than $\ce{SH6}$. Now, just to come back to where we started from: d-orbital participation. Yes, there is an $\mathrm{e_g}$ set of d orbitals that can overlap with the apparently "nonbonding" $\mathrm{e_g}$ linear combination of F2p orbitals, thereby stabilising it. It is true that some degree of this does happen. The issue is how much. Considering the fairly large energy gap between the $\mathrm{e_g}$ orbitals, this interaction is bound to be fairly small, and is nowhere near enough to justify a $\mathrm{sp^3d^2}$ description of it; Martin's comments contain more details.
https://api.stackexchange.com
Please be kind, I am an electronics nub. This is in reference to getting an LED to emit photons. From what I read (Getting Started in Electronics - Forrest Mims III and Make: Electronics) electrons flow from the more negative side to the more positive side. In an example experiment (involving a primary dry cell, a SPDT switch, a resistor and an LED) it states that the resistor MUST be connected to the anode of the LED. In my mind, if the electrons flow from negative to positive, wouldn't the electron flow run through the LED before the resistor; thereby making the resistor pointless?
The resistor can be on either side of the LED, but it must be present. When two or more components are in series, the current will be the same through all of them, and so it doesn't matter which order they are in. I think the way to read "the resistor must be connected to the anode" is as "the resistor cannot be omitted from the circuit."
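To make the "same current either way" point concrete, here is a minimal sketch in R (the language used for code elsewhere in this document); the supply voltage, LED forward-voltage drop, and resistor value are made-up illustrative numbers, not values from the book's experiment.

# Series loop: battery -> resistor -> LED, or battery -> LED -> resistor.
# Kirchhoff's voltage law gives the same equation either way, so the order
# of the components never appears in the result.
v_supply <- 9      # battery voltage in volts (assumed for illustration)
v_led    <- 2      # LED forward-voltage drop in volts (assumed)
r        <- 470    # resistor in ohms (assumed)

i <- (v_supply - v_led) / r    # current through BOTH components, in amps
i * 1000                       # ~14.9 mA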
https://api.stackexchange.com
I've thoroughly read the Wikipedia article on DNA sequencing and can't get one thing. There's some hardcore chemistry involved in the process that somehow splits the DNA and then isolates its parts. Yet DNA sequencing is considered to be a very computationally-intensive process. I don't get what exactly is being computed there - what data comes into computers and what computers compute specifically. What exactly is being computed there? Where do I get more information on this?
Computers are used in several steps of sequencing, from the raw data to finished sequence: Image processing Modern sequencers usually use fluorescent labelling of DNA fragments in solution. The fluorescence encodes the different nucleobase (= “base”) types (generally called A, C, G and T). To achieve high throughput, millions of sequencing reactions are performed in parallel in microscopic quantities on a glass chip, and for each micro-reaction, the label needs to be recorded at each step in the reaction. This means: the sequencer takes progressive digital photographs of the chip containing the sequencing reagent. These photos have differently coloured pixels which need to be told apart and assigned a specific colour value. As can be seen, this (strongly magnified; the image is < 100 µm across!) image fragment is fuzzy and many of the dots overlap. This makes it hard to determine which colour to assign to which pixel (though more recent versions of the sequencing machine have improved focussing systems, and the image is consequently crisper). Base calling One such image is registered for each step of the sequencing process, yielding one image for each base of the fragments. For a fragment of length 75, that’d be 75 images. Once you have analysed the images, you get colour spectra for each pixel across the images. The spectra for each pixel correspond to one sequence fragment (often called a “read”) and are considered separately. So for each fragment you get such a spectrum: (This image is generated by an alternative sequencing process called Sanger sequencing but the principle is the same.) Now you need to decide which base to assign for each position based on the signal (“base calling”). For most positions this is fairly easy but sometimes the signal overlaps or decays significantly. This has to be considered when deciding the base calling quality (i.e. which confidence you assign to your decision for a given base). Doing this for each read yields up to billions of reads, each representing a short fragment of the original DNA that you sequenced. Most bioinformatics analysis starts here; that is, the machines emit files containing the short sequence fragments. Now we need to make a sequence from them. Read mapping and assembly The key point that allows retrieving the original sequence from these small fragments is the fact that these fragments are (non-uniformly) randomly distributed over the genome, and they are overlapping. The next step depends on whether you have a similar, already sequenced genome at hand. Often, this is the case. For instance, there is a high-quality “reference sequence” of the human genome and since all the genomic sequences of all humans are ~99.9% identical (depending on how you count), you can simply look where your reads align to the reference. Read mapping This is done to search for single changes between the reference and your currently studied genome, for example to detect mutations that lead to diseases. So all you have to do is to map the reads back to their original location in the reference genome (in blue) and look for differences (such as base pair differences, insertions, deletions, inversions …). Two points make this hard: You have got billions (!) of reads, and the reference genome is often several gigabytes large. Even with the fastest thinkable implementation of string search, this would take prohibitively long. The strings don’t match precisely. 
First of all, there are of course differences between the genomes – otherwise, you wouldn't sequence the data at all, you'd already have it! Most of these differences are single base pair differences – SNPs (= single nucleotide polymorphisms) – but there are also larger variations that are much harder to deal with (and they are often ignored in this step). Furthermore, the sequencing machines aren't perfect. A lot of things influence the quality, first and foremost the quality of the sample preparation, and minute differences in the chemistry. All this leads to errors in the reads. In summary, you need to find the position of billions of small strings in a larger string which is several gigabytes in size. All this data doesn't even fit into a normal computer's memory. And you need to account for mismatches between the reads and the genome. Unfortunately, this still doesn't yield the complete genome. The main reason is that some regions of the genome are highly repetitive and badly conserved, so that it's impossible to map reads uniquely to such regions. As a consequence, you instead end up with distinct, contiguous blocks ("contigs") of mapped reads. Each contig is a sequence fragment, like reads, but much larger (and hopefully with fewer errors). Assembly Sometimes you want to sequence a new organism so you don't have a reference sequence to map to. Instead, you need to do a de novo assembly. An assembly can also be used to piece contigs from mapped reads together (but different algorithms are used). Again we use the property of the reads that they overlap. If you find two fragments which look like this:

ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
ATCCCCAAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG

You can be quite certain that they overlap like this in the genome:

ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
ATCCCCATTCAACACGCTA-AGCTTGGCGGGGCATACGCACTG

(Notice again that this isn't a perfect match.) So now, instead of searching for all the reads in a reference sequence, you search for head-to-tail correspondences between reads in your collection of billions of reads. If you compare the mapping of a read to searching a needle in a haystack (an often used analogy), then assembling reads is akin to comparing every straw in the haystack to every other straw, and putting them in order of similarity.
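To make the head-to-tail idea concrete, here is a small sketch in R (matching the code language used elsewhere in this document). It only detects exact suffix-to-prefix overlaps, whereas real assemblers must tolerate the mismatches and gaps described above; the function name, the minimum-overlap cutoff, and the second (simplified) read are my own choices for illustration.

# Length of the longest exact overlap in which a suffix of `a` equals a
# prefix of `b` (i.e. `a` ---> `b`, head to tail). Real assemblers allow
# mismatches/gaps and use much cleverer indexing than this brute force.
overlap_length <- function(a, b, min_overlap = 10) {
  max_k <- min(nchar(a), nchar(b))
  if (max_k < min_overlap) return(0)
  for (k in max_k:min_overlap) {
    if (substring(a, nchar(a) - k + 1, nchar(a)) == substring(b, 1, k)) {
      return(k)
    }
  }
  0   # no usable overlap found
}

r1 <- "ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT"
r2 <- "AAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG"  # simplified so the overlap is exact
overlap_length(r1, r2)   # 19, so the two reads can be merged into one contig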
https://api.stackexchange.com
In physics, "almost everything is already discovered, and all that remains is to fill a few unimportant holes." (See Jolly.) Therefore, on Physics SE, people are veering off into different directions: biology, for example. Thus, it happens that a question about bicycles generates some discussion about evolution in biology and animals with wheels. Three explanations are offered for the apparent lack of wheely animals (also on Wikipedia, where, by the way, most Physics SE questions are answered perfectly). Evolutionary constraints: "[A] complex structure or system will not evolve if its incomplete form provides no benefit to an organism." Developmental and anatomical constraints. Wheels have significant disadvantages (e.g., when not on roads). Now, I suggest that all three can be "solved". With time. With a symbiotic relationship between a wheel-like animal and a "driver"-like animal, although this gets awfully close to a "driver"-animal to jump onto an actual (man-made) wheel. (So, perhaps, you can suggest a better loophole around this constraint.) Roads are presumably not the only ecological niche where animals with wheels could thrive. I'm thinking of frozen lakes, although there skates would be better than wheels. What, therefore, is the explanation for there not being any wheeled animals? Please consider, in your answer, the counterfactual: What assumption of yours would be falsified once a wheely animal is discovered?
Wheels are possible on the molecular level — bacterial flagella are rotating cores inside a molecular motor, but wheels larger than the flagellum have not really been found. Defining a wheel as a freely rotating joint that can rotate indefinitely in one direction, a single animal with a wheel is an improbable* development that would require a single animal to have two separate parts (axle/wheel and body). [*read as: pretty much impossible] It's hard to imagine how such a thing could evolve. A wheel and axle would need to be made of living tissue, otherwise it would be vulnerable to wear and tear. Wheels also have problems going over uneven terrain, which is really all terrain animals live in. It's difficult to imagine what sort of selection conditions would be strong enough to push animals away from legs. If you include driver-vehicle symbionts where the 'car' and 'wheel' are actually two animals, then they have evolved. Parasites can have all sorts of symbiotic control over their victims including as means of transport. The Jewel Wasp is one which is the most suggestive of what you may be thinking. The wasp stings its victim (a cockroach) in the thorax to immobilize the animal and then again just behind its head. After this, the wasp can ride the roach, steering it by its antennae back to its nest, where the roach is immobilized to feed the wasp larvae there. (see section "Pet cockroaches" in this reference.) As to the three schools of thought you added to the question, I would probably rather say there were two strong arguments against. The first is whether there is an evolutionary path to wheels (argument 1 in your question), which I doubt. Given even a large amount of evolutionary time you will not see a naked human being able to fly under their own power. Too many structural characteristics of the body plan would all have to be reversed before wings or other means of aerial conveyance could show up. The same can be said for wheels when the body plans have fins/legs/wings already. Argument 3, which I also tend to agree with, is perhaps more convincing. By the time a pair of animals makes a symbiotic relationship to make an axle, or a single macroscopic animal evolves wheels, they will literally develop legs and walk away. When life came onto the land this happened, and since then it's happened several times. It's sort of like saying that the random movement of water molecules might line up to run a stream uphill. It's possible, but there's just such a strong path downwards that the statistical chances of you seeing it happen are nil. This is a hypothetical case, but to argue it in a convincing way I think you would need to describe: a) an environment with a selective advantage for wheels to evolve over legs or other similar adaptations, perhaps based on the energy efficiency of wheels; b) a physiological model for the wheels that conveys a reasonable lifestyle for the wheel. There are lots of questions that would need to be satisfied in our thought experiment. Here are some: "the symbiotic wheel would be spinning constantly; if it died the driver creature would be completely defenseless"; "if the ground were bumpy, all these wheeled animals would get eaten"; "How would the wheel symbiont eat while it's spinning all the time? Only fed by the driver? Even symbionts such as barnacles or lampreys on the flanks of sharks still have their own ability to feed." Many similar questions of this sort ensue where there are many disadvantages which outweigh the advantages for such animals, e.g.
"why are all the flying animals and fish and plants even more similar to airplanes than helicopters?" Sorry if I seem negative, but way back in grad school I actually did go over some of these angles. UPDATE: First Gear found in a Living Creature. A European plant-hopper insect with one of the largest accelerations known in biology has been found to have gears! (There's a movie on the article page. ) The little bug has gears in its exoskeleton that synchronize its two jumping legs. Once again selection surprises. The gears themselves are an oddity. With gear teeth shaped like cresting waves, they look nothing like what you'd find in your car or in a fancy watch. There could be two reasons for this. Through a mathematical oddity, there is a limitless number of ways to design intermeshing gears. So, either nature evolved one solution at random, or, as Gregory Sutton, coauthor of the paper and insect researcher at the University of Bristol, suspects, the shape of the issus's gear is particularly apt for the job it does. It's built for "high precision and speed in one direction," he says. The gears do not rotate 360 degrees, but appear on the surface of two joints to synchronize them as they wind up like a circular spring. The gear itself is not living tissue, so the bug solves the problem of regenerating the gear by growing a new set when it molts (i.e. gears that continually regenerate and heal are still unknown). It also does not keep its gears throughout its lifecycle. So the arguments here still stand; the exception still supports the rule. Additional Note: In his book "the God Delusion" (Chapter 4 somewhere) Richard Dawkins muses that the flagellar motor is the only example of a freely rotating axle that he knows of, and that a wheeled animal might be a true example of 'irreducibly complexity' in biology... but the fact that there is no such example is probably to the point.
https://api.stackexchange.com
In a recent article of Amstat News, the authors (Mark van der Laan and Sherri Rose) stated that "We know that for large enough sample sizes, every study—including ones in which the null hypothesis of no effect is true — will declare a statistically significant effect.". Well, I for one didn't know that. Is this true? Does it mean that hypothesis testing is worthless for large data sets?
It is not true. If the null hypothesis is true then it will not be rejected more frequently at large sample sizes than small. There is an erroneous rejection rate that's usually set to 0.05 (alpha) but it is independent of sample size. Therefore, taken literally the statement is false. Nevertheless, it's possible that in some situations (even whole fields) all nulls are false and therefore all will be rejected if N is high enough. But is this a bad thing? What is true is that trivially small effects can be found to be "significant" with very large sample sizes. That does not suggest that you shouldn't have such large sample sizes. What it means is that the way you interpret your finding is dependent upon the effect size and sensitivity of the test. If you have a very small effect size and a highly sensitive test you have to recognize that the statistically significant finding may not be meaningful or useful. Given some people don't believe that a test of the null hypothesis, when the null is true, always has an error rate equal to the cutoff point selected for any sample size, here's a simple simulation in R proving the point. Make N as large as you like and the rate of Type I errors will remain constant.

# number of subjects in each condition
n <- 100

# number of replications of the study in order to check the Type I error rate
nsamp <- 10000

ps <- replicate(nsamp, {
    # population mean = 0, sd = 1 for both samples, therefore, no real effect
    y1 <- rnorm(n, 0, 1)
    y2 <- rnorm(n, 0, 1)
    tt <- t.test(y1, y2, var.equal = TRUE)
    tt$p.value
})

sum(ps < .05) / nsamp   # ~ .05 no matter how big n is. Note particularly that it is
                        # not an increasing value always finding effects when n is very large.
https://api.stackexchange.com
It is known that HIV is usually transmitted by direct blood or body fluid contact between an infected individual and a healthy person (like blood transfusion or needle sharing): Suppose a mosquito bites an individual suffering from AIDS and in the process sucks up some T cells infected with HIV along with RBCs. Then it bites another person not suffering from the disease, and transfers these infected T cells. Isn't there a high probability of the second individual contracting HIV?
No, this is not possible. There are a few reasons for that, but the most important is that the only thing a mosquito injects is its own saliva, while the blood is sucked into the stomach, where it is digested. To be able to infect other people, HIV would need to be able to leave the gut intact and then also be able to replicate in the mosquito, which it cannot do because insect cells lack the CD4 antigen on their surface. This antigen is needed as a receptor for the virus to bind to and enter the cells. This is also true for other blood-sucking insects like bed bugs or fleas. Other pathogens can do this; examples are Yellow fever and Malaria. In Yellow fever the virus first infects epithelial cells of the gut, then enters the blood system of the insect to finally end up in the salivary glands, where the virus is injected together with the saliva into the bitten person. In Malaria the pathogen is also able to leave the gut region and mature in the salivary glands. HIV can only be transmitted through blood (either through direct transmission, operations etc.), through semen (cum), pre-seminal fluid (pre-cum), rectal fluids, vaginal fluids, and breast milk. See reference 3. References:
1. Why Mosquitoes cannot transmit AIDS
2. Can we get AIDS from mosquito bites?
3. HIV Transmission Risk: A Summary of Evidence
https://api.stackexchange.com
I am working with a set of (bulk) RNA-Seq data collected across multiple runs, run at different times of the year. I have normalized my data using library size / quantile / RUV normalization, and would like to check (quantitatively and/or qualitatively) whether or not normalization has succeeded in removing the batch effects. It is important to note that by "normalization has succeeded", I simply mean that unwanted variation has been removed - further analysis is required to ensure that biological variation has not been removed. What are some plots / statistical tests / software packages which provide a first-step QC for normalization?
You should use box plots and a PCA plot. Let's take a look at the RUV paper: Before normalization and after UQ normalization: Libraries do not cluster as expected according to treatment. ... for UQ-normalized counts. UQ normalization does not lead to better clustering of the samples... Before normalization, the medians in the box plot obviously look very different among replicates. After UQ normalization, the medians look closer, but Trt.11 looks like an outlier. Furthermore, the treatments aren't clustered on the PCA plot. Since they are replicates, you'd like them to be close on the plot. After RUV normalization ... RUVg shrinks the expression measures for Library 11 towards the median across libraries, suggesting robustness against outliers. ... Libraries cluster as expected by treatment. ... RUV has made the distribution more robust and the samples closer on the PCA plot. However, it's still not perfect, as one of the treatments is not close to the other two on the first PC. The vignette for the Bioconductor package RUVSeq describes two functions for these diagnostics: plotRLE and plotPCA.
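If you want a quick first check without extra packages, a relative log expression (RLE) box plot and a PCA of the log counts can be done in base R along the same lines. This is a generic sketch rather than code from the RUV paper; the object names (counts, batch) and the pseudo-count of 1 are assumptions for illustration, and plotRLE/plotPCA from RUVSeq wrap essentially the same ideas.

# counts: genes x samples matrix of (normalized) counts
# batch:  factor giving the run/batch of each sample, length ncol(counts)
logc <- log2(counts + 1)

# Relative log expression: subtract each gene's median, one box per sample.
# Well-normalized samples give boxes centred on 0 with similar spread.
rle <- logc - apply(logc, 1, median)
boxplot(rle, col = as.integer(batch), las = 2,
        main = "RLE per sample (colour = batch)")
abline(h = 0, lty = 2)

# PCA on samples: after successful removal of unwanted variation, samples
# should no longer separate by batch on the leading principal components.
pca <- prcomp(t(logc))
plot(pca$x[, 1], pca$x[, 2], col = as.integer(batch), pch = 19,
     xlab = "PC1", ylab = "PC2", main = "Samples coloured by batch")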
https://api.stackexchange.com
I have finished developing an app for Android and intend to publish it with GPL -- I want it to be open source. However, the nature of the application (a game) is that it asks riddles and has the answers coded into the string resource. I can't publish the answers! I was told to look into storing passwords securely -- but I haven't found anything appropriate. Is it possible to publish my source code with a string array hidden, encrypted, or otherwise obscured? Maybe by reading the answers from an online database? Update Yuval Filmus's solution below worked. When I first read it I was still not sure how to do it. I found some solutions, for the second option: storing the hashed solution in the source and calculating the hash everytime the user guesses. To do this in javascript there is the crypto-js library at For Android, use the MessageDigest function. There is an application (on fdroid/github) called HashPass which does this.
You have at least two options, depending on what problem you want to solve. If you want innocent readers of your code to not get the answers inadvertently, or you at least want to make it a bit difficult so that users are not tempted, you can encrypt the solutions and store the key as part of your code, perhaps as the result of some computation (to make it even more difficult). If you want to prevent users from retrieving the answer, you can use a one-way function, or in computer jargon, a hash function. Store a hash of the answer, and then you can test whether the answer is correct without it being possible to deduce the answer at all without finding it first. This has the disadvantage that it is harder to check for an answer that is close to the correct answer, though there are some solutions even to this problem.
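As a sketch of the second (hashing) option, here is the idea in R, the language used for code elsewhere in this document; on Android or in JavaScript, the MessageDigest class or crypto-js mentioned in the question play the same role. The riddle answer, the normalization of the guess, and the use of the third-party digest package are all illustrative assumptions.

# Requires the 'digest' package (install.packages("digest")); the answer
# string and the lower-casing/trimming are made-up illustrative choices.
library(digest)

# At build time: ship only the hash, never the plain answer.
stored_hash <- digest("forty-two", algo = "sha256")

# At run time: hash the user's guess the same way and compare.
check_answer <- function(guess) {
  digest(tolower(trimws(guess)), algo = "sha256") == stored_hash
}

check_answer("Forty-Two ")    # TRUE
check_answer("forty-three")   # FALSE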
https://api.stackexchange.com
I'll be generous and say it might be reasonable to assume that nature would tend to minimize, or maybe even maximize, the integral over time of $T-V$. Okay, fine. You write down the action functional, require that it be a minimum (or maximum), and arrive at the Euler-Lagrange equations. Great. But now you want these Euler-Lagrange equations to not just be derivable from the Principle of Least Action, but you want it to be equivalent to the Principle of Least Action. After thinking about it for awhile, you realize that this implies that the Principle of Least Action isn't really the Principle of Least Action at all: it's the "Principle of Stationary Action". Maybe this is just me, but as generous as I may be, I will not grant you that it is "natural" to assume that nature tends to choose the path that is stationary point of the action functional. Not to mention, it isn't even obvious that there is such a path, or if there is one, that it is unique. But the problems don't stop there. Even if you grant the "Principle of Stationary Action" as fundamentally and universally true, you realize that not all the equations of motions that you would like to have are derivable from this if you restrict yourself to a Lagrangian of the form $T-V$. As far as I can tell, from here it's a matter of playing around until you get a Lagrangian that produces the equations of motion you want. From my (perhaps naive point of view), there is nothing at all particularly natural (although I will admit, it is quite useful) about the formulation of classical mechanics this way. Of course, this wouldn't be such a big deal if these classical ideas stayed with the classical physics, but these ideas are absolutely fundamental to how we think about things as modern as quantum field theory. Could someone please convince me that there is something natural about the choice of the Lagrangian formulation of classical mechanics (I don't mean in comparison with the Hamiltonian formulation; I mean period), and in fact, that it is so natural that we would not even dare abandon these ideas?
Could someone please convince me that there is something natural about the choice of the Lagrangian formulation... If I ask a high school physics student, "I am swinging a ball on a string around my head in a circle. The string is cut. Which way does the ball go?", they will probably tell me that the ball goes straight out - along the direction the string was pointing when it was cut. This is not right; the ball actually goes along a tangent to the circle, not a radius. But the beginning student will probably think this is not natural. How do they lose this instinct? Probably not by one super-awesome explanation. Instead, it's by analyzing more problems, seeing the principles applied in new situations, learning to apply those principles themselves, and gradually, over the course of months or years, building what an undergraduate student considers to be common intuition. So my guess is no, no one can convince you that the Lagrangian formulation is natural. You will be convinced of that as you continue to study more physics, and if you expect to be convinced of it all at once, you are going to be disappointed. It is enough for now that you understand what you've been taught, and it's good that you're thinking about it. But I doubt anyone can quickly change your mind. You'll have to change it for yourself over time. That being said, I think the most intuitive way to approach action principles is through the principle of least (i.e. stationary) time in optics. Try Feynman's QED, which gives a good reason to believe that the principle of stationary time is quite natural. You can go further mathematically by learning the path integral formulation of nonrelativistic quantum mechanics and seeing how it leads to high probability for paths of stationary action. More importantly, just use Lagrangian mechanics as much as possible, and not just finding equations of motion for twenty different systems. Use it to do interesting things. Learn how to see the relationship between symmetries and conservation laws in the Lagrangian approach. Learn about relativity. Learn how to derive electromagnetism from an action principle - first by studying the Lagrangian for a particle in an electromagnetic field, then by studying the electromagnetic field itself as described by a Lagrange density. Try to explain it to someone - their questions will sharpen your understanding. Check out Leonard Susskind's lectures on YouTube (series 1 and 3 especially). They are the most intuitive source I know for this material. Read some of the many questions here in the Lagrangian or Noether tags. See if you can figure out their answers, then read the answers people have provided to compare. If you thought that the Lagrangian approach was wrong, then you might want someone to convince you otherwise. But if you just don't feel comfortable with it yet, you'd be robbing yourself of a great pleasure by not taking the time to learn its intricacies. Finally, your question is very similar to this one, so check out the answers there as well.
https://api.stackexchange.com
I'm about to start working on a software library of numerical ODE solvers, and I'm struggling with how to formulate tests for the solver implementations. My ambition is that the library, eventually, will include solvers for both nonstiff and stiff problems, and at least one implicit solver (more or less on par with the capabilities of the ode routines in Matlab), so the test methodology needs to reflect the various types of problems and criteria for different solvers. My problem now is that I don't know where to begin with this testing. I can think of a few different ways to test the output of an algorithm: Test for a problem that has an analytical solution, and check that the numerical solution is within tolerance levels for all the returned data points. This requires knowledge of a number of analytical problems which exhibit all the properties that I want the different solvers to work with (stiffness, implicit problems etc), which I don't have, at least not off the top of my head. This method tests the results of a solver method. Thus, there is no guarantee that the solver actually works, just that it works for the given test problem. Therefore, I suspect a large number of test problems is needed to confidently verify that the solver works. Manually calculate the solution for a few time steps using the algorithms I intend to implement, and then do the same with the solvers and check that the results are the same. This requires no knowledge of the true solution to the problem, but in turn requires quite a lot of hands-on work. This method, on the other hand, only tests the algorithm, which is fine by me - if someone else has proven that 4th order Runge-Kutta works, I don't feel a desperate need to. However, I do worry that it will be very cumbersome to formulate test cases, as I don't know a good method to generate the test data (except maybe by hand, which will be a lot of work...). Both the above methods have serious limitations for me with my current knowledge - I don't know a good set of test problems for the first one, and I don't know a good method of generating test data for the second. Is there other ways to verify numerical ODE solvers? Are there other criteria on the implementations that should be verified? Are there any good (free) resources on testing ODE solvers out there1? EDIT: Since this question is very broad, I want to clarify a little. The test suite I want to create will fill two main purposes: Verifying that the solvers work as expected, for the problems they're intended to solve. In other words, a solver for non-stiff problems is allowed to go bananas on a stiff problem, but should perform well on non-stiff problems. Also, if there are other solvers in the library that offer higher accuracy, it might not be necessary to enforce very accurate results - just "accurate enough". Thus, part of my question is what tests should be used for what solvers; or, at least, how one should reason to decide that. Sanity test upon installation of the library. These test need not (should not) be elaborate or time-consuming; just the very basics that can be run in under 5 seconds, but that will alert the user if something is off-the-charts weird. Thus, I also need a way to construct tests that are very simple, but that still tell me something about the state of the library. 
1 Yes, I've been Googling my eyes out, but most of what I find is lecture notes with very trivial examples, with the notable exception of the CWI ODE test set from Bari which I don't know if, or how, I could use for my purposes, since it treats much more sophisticated solvers than the ones I want to test...
This is a very broad question and I am going to give you some things to think about (some are already included in your post, but they are repeated here for completeness). Scope of Problems You need to define the interface of how to specify problems. Are you going to allow parameters that can be fixed or can vary for solutions? Are you going to allow perturbation parameters to slightly perturb problems and see if they are still solvable (for example, an $\epsilon$ parameter to be defined anywhere) in a specific problem? Are you going to allow infinite precision? Are you going to test for speed and sensitivity to numerical precision? Have you chosen two (maybe more) libraries that already exist to compare results? How will you choose stopping criteria, will you use various methods and let the user select or define their own? Are you going to measure error using various measures and allow the user to turn those on and off? Have you looked at the professional packages like Computer-Algebra-Systems (CAS) and understand all of the options they allow? Are you going to allow displaying of results and/or comparisons and/or plots? Problem Recommendations You need to write a test specification defining the source of problems, the scope of how problems were tested, capturing results and metrics of running the routines. I would certainly look to other libraries already out there for the problems they are using (maybe test files). I would go to college libraries and go through books on ODEs and pull out problems of all types, those with known closed form or numeric only solutions. Case 1: We want as many variations of closed form solution problems as we can get in order to compare exact versus numerical results. Case 2: I would go to every numerical analysis book I can find and capture the worked examples and duplicate them. I would additionally capture the problem sets, particularly the ones that have some pathology that exist in most books (sensitivity to this or that types). Case 3: I would go to different branches of applied math like Physics, Ecology, Biology, Economics, et. al and capture problems from each of those domains to validate that your specification language for problems allows for such examples. Case 4: I would research papers/journals that contain the most useful examples where the particular author had to modify a particular approach to account for some pathology or weirdness or hardness. Case 5: Search the web for additional examples. For stiff, see the references here and peruse them ALL to ferret out test problems. Here are some MATLAB examples to peruse. This is not unique. If you look at the book "Numerical Methods for Unconstrained Optimization and Nonlinear Equations" by Dennis and Schnabel, Appendix B, "Test Problems", you can see how they did it. After developing one of the most beautiful set of algorithms write ups I have ever seen, they threw a collection of problems at it that made if go nuts. You had to tweak here and there! They included five very different and pathological problems that strained the capabilities of the solvers. This has taught me that we can continue to throw problems at algorithms that they are incapable of handling for a host of reasons. Note, they even borrowed this set of problems from More', Garbow and Hillstrom (you can also look up that reference and perhaps there are others you can use as a guide). In other words, this is not a trivial task. 
You need Known-Answer test cases that always allow you to test the validity of updates and don't break things. That is, a repeatable and extensive set of problems from low to high, from easy to hard, from possible to impossible, ... You also need a collection of problems that your solvers cannot handle in order to truly understand their limitations.
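As a concrete example of the first kind of test (comparison against a problem with a known analytical solution, plus an observed-order-of-convergence check), here is a small self-contained sketch in R; the classical RK4 implementation, the test problem y' = -y with y(0) = 1, and the tolerances are illustrative choices of mine, not part of any established test set.

# Classical RK4 for y' = f(t, y), fixed step h, scalar y.
rk4 <- function(f, y0, t0, t1, n_steps) {
  h <- (t1 - t0) / n_steps
  y <- y0; t <- t0
  for (i in seq_len(n_steps)) {
    k1 <- f(t, y)
    k2 <- f(t + h/2, y + h/2 * k1)
    k3 <- f(t + h/2, y + h/2 * k2)
    k4 <- f(t + h,   y + h   * k3)
    y <- y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t <- t + h
  }
  y
}

# Known-answer test: y' = -y, y(0) = 1  =>  y(1) = exp(-1).
f <- function(t, y) -y
errs <- sapply(c(20, 40, 80, 160), function(n) abs(rk4(f, 1, 0, 1, n) - exp(-1)))

# 1) absolute accuracy check, 2) observed order should be close to 4 for RK4
stopifnot(errs[1] < 1e-6)
observed_order <- log2(errs[-length(errs)] / errs[-1])
observed_order   # each entry should be approximately 4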
https://api.stackexchange.com
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic: In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary. Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step). Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined. The bold words are what I don't quite understand. If possible, some mathematical formulations will help clarify the concept. Also do the three interpretations agree with each other?
This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests. Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are: The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances). The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance. The F-test (of ratios of estimated variances). The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates. In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it. We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined. "Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent. 
Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test: You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population. You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be. In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $(\theta)$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.) You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.) Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios $$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$ This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this: I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships. Presuming they (the parameters) are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$. The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question. Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use the bin counts to generate a Chi-squared statistic.
Repeat as patience allows; I had time to do 10,000 repetitions. The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram: The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data. You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation. Things went wrong because I violated two requirements of the Chi-squared test:
1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base that estimate on the counts, not on the actual data! (This is crucial.)
The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped. The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature. We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.) With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all. A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses. Edit (Jan 2017) Here is R code to produce the figure following "The standard wisdom about DF..."

#
# Simulate data, one iteration per column of `x`.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats. The first
# two confirm all is working as expected.
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040"  # Intended to show correct distributions
blue <- "#404090" # To show the putative chi-squared distribution
hist(m, freq=FALSE)
curve(dnorm(x, sd=1/sqrt(n)), add=TRUE, col=red, lwd=2)
hist(s^2, freq=FALSE)
curve(dchisq(x*(n-1), df=n-1)*(n-1), add=TRUE, col=red, lwd=2)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
     xlim=c(0, 13), ylim=c(0, 0.55), col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow)
https://api.stackexchange.com
Consider this picture of sun beams streaming onto the valley through the clouds. Given that the valley is only (at a guess) 3km wide, with simple trigonometry and the angles of the beams, this gives the result that the position of the light source is being a few tens of km away at most. What is wrong with the analysis?
This picture (source) should pretty much answer your question: The train's destination is not above the ground, but rather far away, and perspective means that the tracks appear not to be parallel but instead to converge to the vanishing point. The same applies to the beams of light above them. The Sun is very far away and the beams are pretty much parallel, but they're pointing towards you, and perspective makes them appear to converge towards the vanishing point - which in this case is the Sun's location in the sky. The technical term for these beams is "crepuscular rays." Occasionally, when the Sun is very low on the horizon, you can see "anticrepuscular rays," where the beams seem to converge to a different point on the opposite side of the sky to the Sun. Here's an example: (source) This happens for the same reason - the rays are really parallel, and there's another vanishing point in the opposite direction from the Sun.
https://api.stackexchange.com
I've found on multiple sites that convolution and cross-correlation are similar (including the tag wiki for convolution), but I didn't find anywhere how they differ. What is the difference between the two? Can you say that autocorrelation is also a kind of a convolution?
The only difference between cross-correlation and convolution is a time reversal on one of the inputs. Discrete convolution and cross-correlation are defined as follows (for real signals; I neglected the conjugates needed when the signals are complex): $$ x[n] * h[n] = \sum_{k=0}^{\infty}h[k] x[n-k] $$ $$ corr(x[n],h[n]) = \sum_{k=0}^{\infty}h[k] x[n+k] $$ This implies that you can use fast convolution algorithms like overlap-save to implement cross-correlation efficiently; just time reverse one of the input signals first. Autocorrelation is identical to the above, except $h[n] = x[n]$, so you can view it as related to convolution in the same way. Edit: Since someone else just asked a duplicate question, I've been inspired to add one more piece of information: if you implement correlation in the frequency domain using a fast convolution algorithm like overlap-save, you can avoid the hassle of time-reversing one of the signals first by instead conjugating one of the signals in the frequency domain. It can be shown that conjugation in the frequency domain is equivalent to reversal in the time domain.
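A quick numerical check of the time-reversal relationship, written in R to match the code style used elsewhere in this document; the toy signals and the brute-force implementations of the two defining sums are purely for illustration (in practice you would use an FFT-based method as described above).

# Direct implementations of the two defining sums for finite real signals
# (terms whose index falls outside a signal are treated as zero).
conv_direct <- function(x, h) {
  y <- numeric(length(x) + length(h) - 1)
  for (n in seq_along(y)) {
    for (k in seq_along(h)) {
      i <- n - k + 1                      # x[n - k] in 1-based indexing
      if (i >= 1 && i <= length(x)) y[n] <- y[n] + h[k] * x[i]
    }
  }
  y
}

xcorr_direct <- function(x, h) {          # lags 0, 1, ..., length(x) - 1
  sapply(seq_along(x) - 1, function(lag) {
    i <- lag + seq_along(h)               # x[n + k] in 1-based indexing
    ok <- i <= length(x)
    sum(h[ok] * x[i[ok]])
  })
}

x <- c(1, 2, 3, 4, 0, -1)
h <- c(2, -1, 0.5)

# Cross-correlation is convolution with a time-reversed h; the two agree
# once the negative-lag part of the convolution output is discarded.
all.equal(xcorr_direct(x, h),
          tail(conv_direct(x, rev(h)), length(x)))   # TRUE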
https://api.stackexchange.com
If I have a signal that is time limited, say a sinusoid that only lasts for $T$ seconds, and I take the FFT of that signal, I see the frequency response. In the example this would be a spike at the sinusoid's main frequency. Now, say I take that same time signal and delay it by some time constant and then take the FFT, how do things change? Is the FFT able to represent that time delay? I recognize that a time delay represents a $\exp(-j\omega t)$ change in the frequency domain, but I'm having a hard time determining what that actually means. Practically speaking, is the frequency domain an appropriate place to determine the time delay between various signals?
The discrete Fourier transform (DFT), commonly implemented by the fast Fourier transform (FFT), maps a finite-length sequence of discrete time-domain samples into an equal-length sequence of frequency-domain samples. The samples in the frequency domain are in general complex numbers; they represent coefficients that can be used in a weighted sum of complex exponential functions in the time domain to reconstruct the original time-domain signal. These complex numbers represent an amplitude and a phase associated with each exponential function. Thus, each number in the FFT output sequence can be interpreted as: $$ X[k] = \sum_{n=0}^{N-1} x[n] e^{\frac{-j 2 \pi n k}{N}} = A_k e^{j \phi_k} $$ You can interpret this as follows: if you want to reconstruct $x[n]$, the signal that you started with, you can take a bunch of complex exponential functions $e^{\frac{j 2 \pi n k}{N}}, k = 0, 1, \ldots, N-1$, weight each one by $X[k] = A_k e^{j \phi_k}$, and sum them. The result is exactly equal (within numerical precision) to $x[n]$. This is just a word-based definition of the inverse DFT. So, speaking to your question, the various flavors of the Fourier transform have the property that a delay in the time domain maps to a phase shift in the frequency domain. For the DFT, this property is: $$ x[n] \leftrightarrow X[k] $$ $$ x[n-D] \leftrightarrow e^{\frac{-j2 \pi k D}{N}}X[k] $$ That is, if you delay your input signal by $D$ samples, then each complex value in the FFT of the signal is multiplied by the factor $e^{\frac{-j2 \pi k D}{N}}$ (a different phase rotation for each bin $k$). It's common for people not to realize that the outputs of the DFT/FFT are complex values, because they are often visualized as magnitudes only (or sometimes as magnitude and phase). Edit: I want to point out that there are some subtleties to this rule for the DFT due to its finiteness in time coverage. Specifically, the shift in your signal must be circular for the relation to hold; that is, when you delay $x[n]$ by $D$ samples, you need to wrap the last $D$ samples that were at the end of $x[n]$ to the front of the delayed signal. This wouldn't really match what you would see in a real situation where the signal just doesn't start until after the beginning of the DFT aperture (and is preceded by zeros, for example). You can always get around this by zero-padding the original signal $x[n]$ so that when you delay by $D$ samples, you just wrap zeros around to the front anyway. This circular-shift subtlety applies only to the DFT, because it operates on finite-length sequences; the continuous-time Fourier transform and the discrete-time Fourier transform have ordinary (non-circular) shift properties.
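Since the shift property is easy to check numerically, here is a short NumPy sketch (my own addition, not part of the original answer) that verifies the circular-shift relation and shows why a pure delay is invisible in a magnitude-only plot; the values of N and D are arbitrary:

import numpy as np

# Verify the DFT circular-shift property: x[n - D]  <->  exp(-j 2 pi k D / N) X[k]
N, D = 128, 17
rng = np.random.default_rng(1)
x = rng.standard_normal(N)

X = np.fft.fft(x)
k = np.arange(N)

# Delay in the time domain (a circular shift of D samples) ...
x_delayed = np.roll(x, D)
# ... versus applying the equivalent phase ramp in the frequency domain.
X_phase_shifted = X * np.exp(-2j * np.pi * k * D / N)

print(np.allclose(np.fft.fft(x_delayed), X_phase_shifted))        # True
print(np.allclose(np.fft.ifft(X_phase_shifted).real, x_delayed))  # True

# The magnitudes are unchanged by the delay; only the phases differ, which is
# why the delay does not show up in a magnitude-only spectrum.
print(np.allclose(np.abs(np.fft.fft(x_delayed)), np.abs(X)))      # True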
https://api.stackexchange.com
Many machine learning classifiers (e.g. support vector machines) allow one to specify a kernel. What would be an intuitive way of explaining what a kernel is? One aspect I have been thinking of is the distinction between linear and non-linear kernels. In simple terms, I could speak of 'linear decision functions' and 'non-linear decision functions'. However, I am not sure if calling a kernel a 'decision function' is a good idea. Suggestions?
A kernel is a way of computing the dot product of two vectors $\mathbf x$ and $\mathbf y$ in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called "generalized dot products". Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$. Why is this useful? Kernels give a way to compute dot products in some feature space without even knowing what this space is or what $\varphi$ is. For example, consider a simple polynomial kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2$ with $\mathbf x, \mathbf y \in \mathbb R^2$. This doesn't seem to correspond to any mapping function $\varphi$; it's just a function that returns a real number. Assuming that $\mathbf x = (x_1, x_2)$ and $\mathbf y = (y_1, y_2)$, let's expand this expression: $\begin{align} k(\mathbf x, \mathbf y) &= (1 + \mathbf x^T \mathbf y)^2 = (1 + x_1 \, y_1 + x_2 \, y_2)^2 \\ &= 1 + x_1^2 y_1^2 + x_2^2 y_2^2 + 2 x_1 y_1 + 2 x_2 y_2 + 2 x_1 x_2 y_1 y_2 \end{align}$ Note that this is nothing else but a dot product between two vectors $(1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$ and $(1, y_1^2, y_2^2, \sqrt{2} y_1, \sqrt{2} y_2, \sqrt{2} y_1 y_2)$, and $\varphi(\mathbf x) = \varphi(x_1, x_2) = (1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$. So the kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2 = \varphi(\mathbf x)^T \varphi(\mathbf y)$ computes a dot product in 6-dimensional space without explicitly visiting this space. Another example is the Gaussian kernel $k(\mathbf x, \mathbf y) = \exp\big(- \gamma \, \|\mathbf x - \mathbf y\|^2 \big)$. If we Taylor-expand this function, we'll see that it corresponds to a feature map $\varphi$ with an infinite-dimensional codomain. Finally, I'd recommend the online course "Learning from Data" by Professor Yaser Abu-Mostafa as a good introduction to kernel-based methods. Specifically, the lectures "Support Vector Machines", "Kernel Methods" and "Radial Basis Functions" are about kernels.
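As a quick numerical illustration (my own sketch, not from the original answer), the polynomial-kernel example above can be checked directly; the function names phi and poly_kernel are hypothetical, and the feature map is the 6-dimensional one written out in the text:

import numpy as np

def phi(v):
    # Explicit 6-dimensional feature map for the kernel (1 + x^T y)^2 on R^2
    x1, x2 = v
    return np.array([1.0, x1**2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    # The kernel itself: a single number computed without leaving R^2
    return (1.0 + x @ y) ** 2

rng = np.random.default_rng(42)
x, y = rng.standard_normal(2), rng.standard_normal(2)

print(poly_kernel(x, y))                               # same value ...
print(phi(x) @ phi(y))                                 # ... via the 6-D dot product
print(np.isclose(poly_kernel(x, y), phi(x) @ phi(y)))  # True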
https://api.stackexchange.com